You can send me a message anonymously here: https://www.admonymous.co/will
Any update?
Thanks! Are many of the ~14B new chicks each year coming from a relatively small number of breeding hens who have many offspring? Or is it mostly 2 chicks per hen?
4:30 "Most of the world's 8 billion egg-laying hens, roughly one for every person alive on Earth today, are confined right now in cages like these."
4:56 "The egg industry has no need for the 7 billion male chicks born annually, so it kills them on their first day alive in this world."

Just to check my understanding of the numbers here:
Google tells me "Egg-laying hens on factory farms typically live for 12 to 18 months, or about a year and a half, before they are slaughtered when their egg production begins to decline."
So I guess once each year on average the population of 8 billion egg-laying hens is replaced by a new population of 8 billion egg-laying hens. So the breeding hens are having about 15 billion chicks annually (7 billion male, 8 billion female). (Broiler chickens for meat are separate.)
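As a quick sanity check of this arithmetic, here's a minimal Python sketch (the 8 billion hen population and 7 billion male chicks are the video's figures; the once-a-year flock turnover is the assumption I made above):

```python
# Back-of-the-envelope check of the chick numbers above.
laying_hens = 8e9             # hens alive at any one time (from the video)
flock_turnovers_per_year = 1  # assumption above: flock replaced ~once a year
female_chicks = laying_hens * flock_turnovers_per_year  # ~8 billion
male_chicks = 7e9             # culled males (from the video)
print(f"~{(female_chicks + male_chicks) / 1e9:.0f} billion chicks per year")  # ~15
```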
How would you evaluate the cost-effectiveness of Writing Doom - Award-Winning Short Film on Superintelligence (2024) in this framework?
(More info on the film's creation in the FLI interview: Suzy Shepherd on Imagining Superintelligence and "Writing Doom")
It received 507,000 views, and at 27 minutes long, if the average viewer watched 1/3 of it, then that's 507,000 × 27 × 1/3 = 4,563,000 viewer-minutes (VM).

I don't recall whether the $20,000 Grand Prize it received was enough to reimburse Suzy for her cost to produce it and pay for her time, but if so, that'd be 4,563,000 VM / $20,000 = 228 VM/$.
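For transparency, here's the same Fermi estimate as a minimal Python sketch (the 1/3 average watch fraction and treating the $20,000 prize as the full cost are the assumptions stated above):

```python
# Fermi estimate of viewer-minutes per dollar for "Writing Doom".
views = 507_000
length_minutes = 27
avg_fraction_watched = 1 / 3  # assumption: average viewer watches a third
cost_usd = 20_000             # assumption: Grand Prize ~ total cost

viewer_minutes = views * length_minutes * avg_fraction_watched  # ~4,563,000 VM
vm_per_dollar = viewer_minutes / cost_usd                       # ~228 VM/$
print(f"{viewer_minutes:,.0f} VM, {vm_per_dollar:.0f} VM/$")
```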
Not sure how to do the quality adjustment using the OP framework, but naively my intuition is that short films like this one are more effective per minute at changing people's minds about the importance of the problem than the average video from the AI safety channels. How valuable that is probably depends mostly on how valuable shifting the opinion of the general public is. It's not a video that I'd expect to create AI safety researchers, but I expect it did help shift the Overton window on AI risk.
Drew Spartz's channel: https://www.youtube.com/@AISpecies
Footnote 2 completely changes the meaning of the statement from its common-sense interpretation. It makes it so that, e.g., a future scenario in which AI takes over and causes existential catastrophe and the extinction of biological humans this century does not count as extinction, so long as the AI continues to exist. As such, I chose to ignore it with my "fairly strongly agree" answer.
I endorse this for non-EA vegans who aren't willing to donate the money to wherever it will do the most good in general, but as my other comments have pointed out, if a person (vegan or non-vegan) is willing to donate the money to wherever it will do the most good, then they should just do that rather than donate it for the purpose of offsetting.
Per my top-level comment citing Claire Zabel's post Ethical offsetting is antithetical to EA, offsetting past consumption seems worse than just donating that money to wherever it will do the most good in general.
I see you've taken the 10% Pledge, so I gather you're willing to donate effectively.
While you might feel better if you both donate X% to wherever you believe it will do the most good and $Y to the best animal charities to offset your past animal consumption, I think you instead ought to just donate X% + $Y to wherever it will do the most good.
NB: Maybe you happen to think the best giving opportunity to help animals is the best giving opportunity in general, but if not then my claim is that your offsetting behavior is a mistake.
This seems like a useful fundraising tool to target people who are unwilling to give their money to wherever it will do the most good, but I think it should be flagged that if a person is willing to donate their money to wherever it will do the most good then they should do that rather than donate to the best animal giving opportunities for the purpose of ethical offsetting. See Ethical offsetting is antithetical to EA.
FYI you can edit your original comment to add this in.
I'm now over 20 minutes in and haven't quite figured out what you're looking for. Just to dump my thoughts (not necessarily looking for a response):
On the one hand it says "Our goal is to discover creative ways to use AI for Fermi estimation," but on the other hand it says "AI tools to generate said estimates aren't required, but we expect them to help."
From the Evaluation Rubric, "model quality" is only 20%, so it seems like the primary goal is neither to create a good "model" (which I understand to mean a particular method for making a Fermi estimate on a particular question) nor to see if AI tools can be used to create such models.
The largest score (40%) is for whether the *result* of the model that is created (i.e. the actual estimate that the model spits out with the numbers put into it) is surprising or not, with more surprising being better. But it's unclear to me whether the estimate actually needs to be believed for it to count as surprising. Extreme numbers could just mean that the output is bad or wrong, not that the output should be evidence of anything.
I don't see any particular reason to believe the means to obtain that knowledge existed and was used when you can't tell me what that might look like, never mind how a small number of apparently resource-poor people obtained it...
I wasn't a particularly informed forecaster, so my not being able to tell you what information would have been sufficient to justify a rational 65+% confidence in Trump winning shouldn't be much evidence to you about the practicality of a very informed person reaching a 65+% credence rationally. Identifying what information would have been sufficient is a very time-intensive, costly project, and given that I hadn't done it already, I wasn't about to spend months researching the data that people in principle had access to that might have led to a >65% forecast just to answer your question.
Prior to the election, I had an inside-view credence of 65% that Trump would win, but I considered myself relatively uninformed, so I meta-updated on election models and betting market prices to be more uncertain, making my all-things-considered view closer to 50-50. As I wrote on November 4th:

> My 2/10 low-information inside-view judgment is that Trump is about 65% likely to win PA and the election. My all-things-considered view is basically 50%. However, notably, after about 10 hours of thinking about who will win in the last week, I don't know if I actually trust Nate and prediction markets to be doing a good job. I suspect that there may be well-informed people in the world who *know* that the markets are wrong and have justified "true" beliefs that one candidate is >65% likely to win. Such people presumably have a lot of money on the line, but not enough to more [sic] the market prices far from 50%.

So I held this suspicion before the election, and I hold it still. I think it's likely that such forecasters with rational credences of 65+% in a Trump victory did exist, and even if they didn't, I think it's possible that they could have existed if more people had cared more about finding out the truth of who would win.
So I think the suggestion that there was some way to get to 65-90% certainty which apparently nobody was willing to either make with substantial evidence or cash in on to any significant extent is a pretty extraordinary claim...
I'm skeptical that nobody was rationally (i.e. not overconfidently) at a >65% belief that Trump would win before election day. Presumably a lot of people holding Yes and buying Yes when Polymarket was at ~60% Trump believed Trump was >65% likely to win, right? And presumably a lot of them cashed in for a lot of money. What makes you think nobody was at >65% without being overconfident?
I'll grant that the French whale was overconfident, since that seems very plausible (though I don't know it for sure), but that doesn't mean everyone at >65% was overconfident.
I'll also note that just because the market was at ~60% (or whatever precisely) does not mean that there could not have been people participating in the market who were significantly more confident that Trump would win, and rationally so.
The information that we gained between then and 1 week before the election was that the election remained close
I'm curious if by "remained close" you meant "remained close to 50/50"?
(The two are distinct, and I was guilty of pattern-matching "~50/50" to "close" even though ~50/50 could have meant that either Trump or Harris was likely to win by a lot (e.g. swing all 7 swing states) and we just had no idea which was more likely.)
Could you say more about "practically possible"?
Yeah. I said a bit about that in the ACX thread in an exchange with a Jeffrey Soreff here. Initially I was talking about a "maximally informed" forecaster/trader, but when Jeffrey pointed out that that term was ill-defined, I realized that I had a lower-bar level of informedness in mind that was more practically possible than some notions of "maximally informed."
What steps do you think one could have taken to have reached, say, a 70% credence?
Basically just steps to become more informed and steps to have better judgment. (Saying specifically what knowledge would be sufficient to be able to form a forecast of 70% seems borderline impossible or at least extremely difficult.)
Before the election I was skeptical that people like Nate Silver and his team and The Economist's election modeling team were actually doing as good a job as they could have been[1] forecasting who'd win the election, and now, post-election, I still remain skeptical that their forecasts were close to being the best they could have been.
[1] By "doing as good a job as they could have been" I mean that I think they would have made substantially better forecasts in expectation (lower expected Brier scores) if figuring out who was going to win had been really important to them (significantly more important than it actually was), if they hadn't cared about the blowback for being "wrong" from making a confident wrong-side-of-maybe forecast, if they had been given a big budget to do research and acquire information (e.g. $10M), and if they had been highly skilled forecasters with great judgment (like the best in the world, but not superhuman; maybe Nate Silver is close to this, I don't know: I read his book The Signal and the Noise, but it seems plausible that there is still substantial room for him to improve his forecasting skill).
Note that I also made five Manifold Markets questions to help evaluate my PA election model (Harris and Trump means and SDs) and the claim that PA is ~35% likely to be decisive.
Will Pennsylvania be decisive in the 2024 Presidential Election?
How many votes will Donald Trump receive in Pennsylvania? (Set)
How many votes will Donald Trump receive in Pennsylvania? (Multiple Choice)
How many votes will Kamala Harris receive in Pennsylvania? (Set)
How many votes will Kamala Harris receive in Pennsylvania? (Multiple Choice)
(Note: I accidentally resolved my Harris questions (#4 & #5) to the range of 3,300,000-3,399,999 rather than 3,400,000-3,499,999. Hopefully the mods will unresolve and correct this for me per my comments on the questions.)
This exercise wasn't too useful, as there weren't enough other people participating in the markets to significantly move the prices from my initial beliefs. But I suppose that's evidence that they didn't think I was significantly wrong.
Before the election I made a poll asking "How much would you pay (of your money, in USD) to increase the probability that Kamala Harris wins the 2024 Presidential Election by 0.0001% (i.e. 1/1,000,000, or 1-in-a-million)?"
You can see 12 answers from rationalist/EA people after submitting your answer to the poll or jumping straight to the results.
I think elections tend to have low aleatoric uncertainty, and that our uncertain forecasts are usually almost entirely due to high epistemic uncertainty. (The 2000 Presidential election may be an exception where aleatoric uncertainty is significant. Very close elections can have high aleatoric uncertainty.)
I think Trump was actually very likely to win the 2024 election as of a few days before the election, and we just didn't know that.
Contra Scott Alexander, I think betting markets were priced too low, rather than too high. (See my (unfortunately verbose) comments on Scott's post Congrats To Polymarket, But I Still Think They Were Mispriced.)
I think some people may have reduced their epistemic uncertainty significantly and had justified (not overconfident) beliefs that Trump was ~65-90% likely to win.
I totally am willing to believe that the French whale was not one of those people and actually just got lucky.
But I do think that becoming informed enough to rationally reach a >65% credence that Trump would win was practically possible.
Thanks for writing up this post, @Eric Neyman. I'm just finding it now, but want to share some of my thoughts while they're still fresh in my mind before the next election season.
This means that one extra vote for Harris in Pennsylvania is worth 0.3 μH. Or put otherwise, the probability that she wins the election increases by 1 in 3.4 million.
My independent estimate from the week before the election was that Harris getting one extra vote in PA would increase her chance of winning the presidential election by about 1 in 874,000.
My methodology was to forecast the number of votes that Harris and Trump would each receive in PA, calculate the probability of a tie in PA given my probability distributions for the number of votes they would each get, then multiply the probability of the PA tie by the probability that PA is decisive (conditional on a tie).
For simplicity, I used normal distributions to model the expected number of votes Harris and Trump would each get, so I could easily model the outcomes in Google Sheets (even though my credence/PDF did not perfectly match a normal distribution). These were my parameters:
| Harris Mean | Harris SD | Trump Mean | Trump SD |
|---|---|---|---|
| 3,450,000 | 80,000 | 3,480,000 | 90,000 |

Simulating 10,000 elections in Google Sheets with these normal distributions found that about 654 elections per 10,000 were within 20,000 votes, which translates to a 1 in ~306,000 chance of PA being tied. I then multiplied this by a ~35%[1] chance that PA would be decisive (conditional on it being tied), to get a 1 in ~874,000 chance of an extra vote for Harris in PA changing who won overall.
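For anyone who wants to reproduce this outside Google Sheets, here's a minimal Python sketch of the same Monte Carlo. One interpretive assumption on my part: I read "within 20,000 votes" as the margin falling in a ~20,000-vote-wide window (|margin| < 10,000), since that's what makes the 654/10,000 figure translate to a 1 in ~306,000 tie probability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of simulated elections, as in the Sheets version

# Vote-total distributions (parameters from the table above)
harris = rng.normal(3_450_000, 80_000, n)
trump = rng.normal(3_480_000, 90_000, n)

# Assumption: "within 20,000 votes" means the margin lands in a
# ~20,000-vote window, i.e. |margin| < 10,000. That fraction, spread
# over ~20,000 possible margins, approximates an exact tie.
margin = harris - trump
near_tie_frac = np.mean(np.abs(margin) < 10_000)  # ~0.065
p_tie = near_tie_frac / 20_000                    # ~1 in 306,000
p_decisive = p_tie * 0.35                         # x P(PA decisive | tie)
print(f"P(tie) ~ 1 in {1/p_tie:,.0f}; "
      f"P(one vote flips election) ~ 1 in {1/p_decisive:,.0f}")
```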
99% of the votes in PA are in right now, with the totals currently at: 3,400,854 for Harris and 3,530,234 for Trump.
This means that the vote totals for Harris and Trump were both within 1 standard deviation of my mean expectation: Harris was about half an SD low and Trump about half an SD high.

From this, it's not clear that my SDs of 80,000 votes for Harris and 90,000 votes for Trump were too narrow, as your (or Nate's) model expected.
So I think my 1 in ~874,000 chance of an extra vote for Harris determining the president might have been more reasonable than your 1 in ~3.4 million.

[1] Note: Privately, I thought models and prediction markets might be wrong about the chance of PA being decisive, and that maybe it was closer to ~50% rather than ~25-35%. But my reason for thinking this was bad, and I didn't realize it until after the election: I made a simple pattern-matching mistake by assuming "the election is a toss-up" meant "it will be a close election." I failed to consider other possibilities like "the election will not be close, but we just don't know which side will win by a lot." (In retrospect, this was a very silly mistake for me to make, especially since I had seen that as of some late-October date The Economist said that the two most likely of the 128 swing-state combinations were Trump swinging all seven (20%) and Harris swinging all seven (7%).)
I was reminded of this comment of mine today and just thought I'd comment again to note that 5 years later my social-impact career prospects have gotten even worse. Concretely, I quit my last job (a sales job just for money) a year and a half ago and have been living on savings since, without applying anywhere in 18 months. Things definitely have not gone as I had hoped when I wrote this comment. Some things have gotten better in life (e.g. my mental health is better than it has been in over five years), but career-wise I'm doing very poorly (not even earning money with a random job) and have no positive trajectory to speak of.