This could work in my version if such a review didn’t exist and you wanted to just say, “hey I think this’d be valuable for someone to do!” :)
I was thinking of open research questions, like this post and its links: https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism. Although a number of these are probably outdated, and I wouldn’t want to limit what could be added to such a tab. Generally: questions people have that would be worth answering with regard to effective altruism.
So that if someone had some free time and/or wanted to practice answering such a question, you could go to this tab. Maybe on the forum home page. Maybe answers could then be linked to questions, which could potentially be crossed off. Maybe eventually bounties could be attached to certain questions, if a person or org wants a (or another) take on a question.
Would it be worth having some sort of running, contributable-to tab for open questions? It could also encourage people to flag open questions they see in posts.
Did the research experience help you be a better manager and operator from within research organizations?
I feel like getting an understanding by doing some research could be helpful, and you could probably gain generalizable/transferable skills, but I’m just speculating here.
Agreed, interesting question. To add some flavor to the boosters: I think “national security” proponents is another way to categorize them.
Personally I can’t say much to this, but rather than leave you with no answer/comment, here’s my naive suggestion:
If you haven’t already considered the 80,000 Hours career guide and their advising, that seems like a good resource for someone in your position.
https://msutoday.msu.edu/news/2024/new-msu-study-fewer-people-want-to-stand-out-in-public
Could be a useful stat to cite as a fact about reality that supports the hypothesis that life isn’t actually getting better.
“The study looked at three dimensions of uniqueness: concern about other people’s reactions, desire to break the rules and the willingness to defend beliefs publicly. All three facets declined but the most dramatic were people being hesitant to defend their beliefs publicly (a 6.52% decline) and becoming more concerned with what people think about them (a 4.28% decline).”
Zvi on: A Playbook for AI Policy at the Manhattan Institute
From https://www.noahpinion.blog/p/at-least-five-interesting-things-68b. I’m looking at this as descriptive, not prescriptive; interesting to see others uncover ineffective altruism.
San Francisco nonprofits are both corrupt and ineffectual
In 2022, there was a big tech bust that deprived the city of San Francisco, and the state of California in general, of a lot of tax revenue. At that point, San Francisco began taking a harder look at the nonprofits to which it has famously outsourced many of its social services. Unsurprisingly, many cases of incompetence and outright corruption are being discovered.
The SF Standard is the best news outlet reporting on this ongoing series of investigations. For example, here’s a story about the Dream Keeper Initiative, a San Francisco city effort to help its rapidly dwindling Black population:
In February 2021 [San Francisco announced] the Dream Keeper Initiative, a landmark piece of legislation that promised to redirect $120 million to address issues caused by systemic racism…Over the last two and a half years, Dream Keeper’s successes have led to more than 1,300 people getting jobs or business training…Investments have been made in healthcare for Black mothers and infants…There’s also the Downpayment Assistance Loan Program, which has distributed more than $24 million to help 57 people purchase homes, a path to upward economic mobility…But despite the many good deeds of Dream Keeper, the initiative has become a bookkeeper’s nightmare…
SF Black Wall Street, a nonprofit that advocates for Black entrepreneurship, has received more than $2.3 million through Dream Keeper…But nearly a third of that money was spent on just two Juneteenth parties…that cost more than $700,000 to produce. That is more than the total amount ($660,000) SF Black Wall Street has spent on small business grants…
A closer inspection of IRS filings for SF Black Wall Street shows that one co-director, Tinisch Hollins, did not report taking a salary while doing 20 hours of work a week. Instead, she redirected tens of thousands of dollars in administrative fees to Ujima Global Consultancy, an LLC she created…[Dream Keeper Director Sheryl] Davis said she was not aware that directors of some of Dream Keeper’s nonprofit partners have set up shell companies, or that some directors have been giving themselves substantial raises — in the tens of thousands — immediately after the infusion of millions in city funds…
Dr. April Silas, CEO of the Homeless Children’s Network, which has received more than $3.7 million from the Office of Economic and Workforce Development, saw her salary increase from almost $232,000 in 2020 to more than $283,000 two years later…Davis confirmed that the Homeless Children’s Network was recently cut off from city funding…
One recipient of Dream Keeper money, J&J Community Resource Center, made headlines after its director tried to get booze and cigars reimbursed.
And here’s a story about SF Safe, a nonprofit that works with the police:
The fired former executive director of a San Francisco nonprofit has been arrested and charged with 34 felonies related to the misuse of more than $700,000 in public funds…Kyra Worthy, 49, of Richmond faces charges that include misappropriation of public money, submitting fraudulent invoices, theft, wage theft and check fraud during her tenure as head of SF SAFE, a nonprofit that partnered with the San Francisco Police Department…Worthy is accused of failing to pay more than $500,000 to subgrantees of a city contract, embezzling more than $100,000 from SF SAFE for personal use and committing wage theft against employees…
Prosecutors allege that Worthy’s mismanagement led to the 48-year-old charity ceasing operations in January, despite receiving millions in public and private funds over five years…Worthy allegedly spent lavishly on parties and events, even as the nonprofit struggled financially. Prosecutors say she spent more than $350,000 on luxury gift boxes in 2022-23 and nearly $100,000 on a single event called “Candy Explosion” in October 2023…In 2018, she allegedly paid her landlord $8,000 using three nonprofit cashier’s checks, telling accountants the funds were for community events…Court documents state Worthy spent more than $90,000 of nonprofit money in 2019 and 2020 on a home healthcare worker for her parents in North Carolina. She reportedly created vague invoices and categorized these payments as community meeting expenses and a District 10 safety project…
Prosecutors further allege Worthy stopped paying payroll taxes for 27 employees from September 2023 to January, when SF SAFE shut down. Court documents say she continued issuing regular paychecks, leading employees to believe taxes were being paid…The alleged wage theft totaled about $80,000 over four months. Worthy is accused of falsely claiming that full wages and taxes were paid when submitting invoices for a city contract…A holiday party, which was not a fundraiser, allegedly cost $56,000.
The practice of farming city services out to nonprofits is simultaneously a way of wasting giant amounts of money, and an invitation to corruption. The SF city government should be building up its state capacity to provide these services in-house instead of lobbing money at nonprofits.
That said, there also may be government corruption going on here. Sheryl Davis, the city employee who runs the Dream Keeper initiative, has also been accused of misuse of funds:
Allegations made against Davis in the whistleblower complaint range from using city funds to pay for trips for family and friends to designing contracts to be less than $10,000 to avoid formal reviews.
Records obtained from the city show that Davis [herself] charged the city for more than $51,000 in reimbursements from 2020 through the first half of 2023. That total was almost $22,000 more than the second-highest department head…Davis wrote a children’s book last year that netted her more than $10,000 in outside income. She acknowledged that the city did make one “bulk” purchase of the book but said she was unsure of how many copies.
So SF clearly has some cultural problems here. The nonprofit problem is important, but overall what’s needed is a crackdown on people who see the city government’s budget as their personal piggy bank.
Agreed—I haven’t looked very closely here either, but eg “Fox, Dominion reach $787M settlement over election claims” seems like a robust signal. https://apnews.com/article/fox-news-dominion-lawsuit-trial-trump-2020-0ac71f75acfacc52ea80b3e747fb0afe
Another more balanced piece I appreciated: https://open.substack.com/pub/noahpinion/p/why-trump-or-harris-might-fail-to?r=128a6t&utm_medium=ios
And FWIW I’d be happy to discover all of the above list is also kinda unsubstantiated. May check them out independently if I have time.
I appreciate you writing all of this up, thank you.
One byproduct of having reasonable people argue for the potential upsides of both candidates is that it leads me to not just think, “oh, we’re absolutely doomed if X person becomes president,” but rather that there may be reasonable arguments that either candidate could handle some areas decently well. And I’m ever conscious, or trying to be, of negativity bias in the news. Maybe it leaves some room for the optimistic observers or something.
Saying that, I will note this list from a comment on the original thread by Samuel Hammond, as something I’m not sure is addressed above re: Jan 6th, but which does put me more on the “Trump as a threat to democracy” side. But I also haven’t independently evaluated these claims. Just thought I’d contribute what I can in the moment:
“Trump’s threat to democracy and his overt willingness to break laws in order to hold on to power is NOT just about Jan 6. It is not n=1. There are multiple shocking and blatant examples of Trump’s attempts to undermine the election prior to the day of Jan 6. This includes use of false slate of electors (people illegally purported to be elected electors) to go to state governments to discount the vote of millions of Americans; the orders to members of his DOJ to send letters to states lying about corruption in the elections to have them overturned (in which DOJ officials en masse threatened to resign if Trump placed Clark as AG to do so); he called Raffensperger (Georgia’s SOS) as a private citizen for him to search for the exact number of votes he needed to win and threatened legal action; on the day of Jan 6, he told his VP to illegally throw out 81 million votes; and countless of many other examples.
Why is Pence not his running mate? How come many of the people he worked with before are not a part of his campaign? Because he made it clear that his 2nd term will have nothing but yes-men, and it is very unlikely they would have the courage like Pence or Jeffrey Rosen. Vance already said that Pence should have listened to Trump and throw out the votes. The “guardrails” that stopped him the first time wouldn’t be there.”
IDK if this actually works since I only just signed up, but the “Join us” button in the top right leads to https://sentinel-team.org/contact/
Seems you can add yourself to the mailing list from there.
Reading and engaging with the forum is good for a meta reason: engaging with and encouraging other people to keep making posts, because engagement seems to exist and they’re incentivized to post. Or even encouraging more junior people to try to contribute. IDK what the EA Forum felt like ~10 years ago, but there were probably lower standards for engagement.
I think this is a good instance of focusing through cause areas, and one I had in mind.
I empathize with the desire behind the request, which is why I’m responding, but yeah, I’m unsure the EA Forum is the right place for the presidential election.
I wonder what it would look like if one were to make an argument for a candidate strictly across causes that are more EA-consensus/funded by Open Phil: candidate X is good for animal welfare, global health and development, and pandemic and AI catastrophic/existential risk. And here are the policies, and here is the total analysis across these, which differentially directs this many GiveDirectly-rated dollars/QALYs.
But yeah, seems hard. Also open to just being wrong here.
Thank you for making the effort to write this post.
Reading Situational Awareness, I updated pretty hard toward national security as the probable most successful future path, and now find myself a little chastened by your piece, haha (I also went around looking at other responses, but yours was first and I think it’s the most lit/evidence-based). I think I bought into the “Other” argument for China and authoritarianism, and the ideal scenario of being ahead in a short-timeline world, so that you don’t even have to concern yourself with difficult coordination, or even war, if it happens fast enough.
I appreciated learning about macrosecuritization and Sears’ thesis; if I’m a good scholar, I should also look into Sears’ historical case studies of national securitization being inferior to macrosecuritization.
Other notes for me from your article included: Leopold’s pretty bad handwaviness around pausing as simply, “not the way”, his unwillingness to engage with alternative paths, the danger (and his benefit) of his narrative dominating, and national security actually being more at risk in the scenario where someone is threatening to escape mutually assured destruction. I appreciated the note that safety researchers were pushed out of/disincentivized in the Manhattan Project early and later disempowered further, and that a national security program would probably perpetuate itself even with a lead.
FWIW, I think Leopold also comes to the table with a different background and set of assumptions. I’m confused about this, but charitably: I think he genuinely believes China is the bigger threat versus the intelligence explosion; I don’t think he intentionally frames the Other as China to diminish macrosecuritization in the face of AI risk. See the next note for more. But yes, again, I agree his piece doesn’t have good epistemics when it comes to exploring alternatives, like a pause, and he seems to be doing his darnedest narratively to say the path he describes is The Way (even capitalizing words like this), but...
One additional aspect of Leopold’s beliefs that I don’t believe is present in your current version of this piece: Leopold makes a pretty explicit claim that alignment is solvable, and furthermore believes it could be solved in a matter of months. From p. 101 of Situational Awareness:
Moreover, even if the US squeaks out ahead in the end, the difference between a 1-2 year and 1-2 month lead will really matter for navigating the perils of superintelligence. A 1-2 year lead means at least a reasonable margin to get safety right, and to navigate the extremely volatile period around the intelligence explosion and post-superintelligence.77 [NOTE] 77 E.g., space to take an extra 6 months during the intelligence explosion for alignment research to make sure superintelligence doesn’t go awry, time to stabilize the situation after the invention of some novel WMDs by directing these systems to focus on defensive applications, or simply time for human decision-makers to make the right decisions given an extraordinarily rapid pace of technological change with the advent of superintelligence.
I think this is genuinely a crux he has with the ‘doomers’, and to a lesser extent the AI safety community in general. He seems highly confident that AI risk is solvable (and will benefit from gov coordination), contingent on there being enough of a lead (which requires us to go faster to produce that lead) and good security (again, increase the lead).
Finally, I’m sympathetic to Leopold’s argument that the government would be better than corporations to be in charge here (and I think the current rate of AI scaling makes this likely at some point: models hit proto-natsec-level capability before x-risk-level capability, maybe playing out on the model-generation release schedule). His emphasis on security itself seems pretty robustly good (I can thank him for introducing me to the idea of North Korea walking away with AGI weights). Also, the writing is just pretty excellent.
Noting another recent post doing this: https://forum.effectivealtruism.org/posts/RbCnvWyoiDFQccccj/on-the-dwarkesh-chollet-podcast-and-the-cruxes-of-scaling-to
A quick OpenAI o1-preview BOTEC of the additional emissions from a sort of Leopold scenario ~2030, assuming the energy is mostly provided by natural gas, since I was kinda curious. Not much time was spent on this, and I took the results at face value. I (of course?) buy that emissions don’t matter in the short term, in a world where R&D is increasingly automated and scaled.
Phib: Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas. Also, that AI inference is used at an increasing rate, say the equivalent of 100 million H100s, increasing from today’s level to 100M over the next 6 years, and it’s more or less the same as today’s energy breakdown, although with more natural gas over time.
What do the emissions from such a build out of natural gas look like?
[Reasoning available here: https://chatgpt.com/share/672ed183-7c3c-800d-ab0e-d08020d072c7]
**Answer:**
The emissions from this build-out of natural gas over the next six years would be significant. Specifically:
- **Total additional emissions over six years:** approximately 1.136 billion metric tons of CO₂.
- **Average annual emissions increase:** about 189 million metric tons of CO₂ per year.
- **Percentage of current U.S. emissions:** an approximate 2.9% increase over current annual U.S. greenhouse gas emissions.
These emissions result from the increased natural gas usage required to meet the additional electricity demand—both from the general 20% increase in electricity consumption and the substantial growth in AI inference workloads powered by the equivalent of 100 million NVIDIA H100 GPUs.
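As a rough sanity check on those figures, here’s a minimal Python sketch of the same BOTEC under my own illustrative assumptions (approximate current US generation of 4,000 TWh/yr, a linear ramp of the extra demand, and a combined-cycle gas emission factor of ~0.41 kg CO₂/kWh; none of these are sourced numbers). It lands in the same ballpark as o1’s 1.14 Gt total / 189 Mt per year:

```python
# Back-of-the-envelope check of the emissions figures above.
# All inputs are rough, illustrative assumptions, not sourced data.

US_ELECTRICITY_TWH = 4_000     # approx. current US annual generation, TWh/yr
EXTRA_SHARE = 0.20             # additional demand reached by year 6
GAS_KG_CO2_PER_KWH = 0.41      # approx. combined-cycle natural gas
YEARS = 6

extra_final_twh = US_ELECTRICITY_TWH * EXTRA_SHARE   # 800 TWh/yr at year 6
# Linear ramp from 0 to the final level -> cumulative = final * years / 2
cumulative_twh = extra_final_twh * YEARS / 2         # 2,400 TWh over 6 years

KWH_PER_TWH = 1e9
total_tonnes = cumulative_twh * KWH_PER_TWH * GAS_KG_CO2_PER_KWH / 1_000

print(f"Cumulative extra emissions: {total_tonnes / 1e9:.2f} Gt CO2")   # ~0.98 Gt
print(f"Average per year: {total_tonnes / YEARS / 1e6:.0f} Mt CO2/yr")  # ~164 Mt/yr
```

Given how crude both estimates are, getting within ~20% of the o1 numbers seems like reasonable agreement; the gap plausibly comes from how the inference load on top of the 20% demand increase is counted.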
Phib: Imagine there is a similar buildout in China, if not larger, how do the emissions from all of the above (within some confidence interval) place us environmentally within 2 years after 2030 of such a buildout and increase in emissions? Within 10 years? Considering a more or less constant rate of emissions thereafter for each.
**Conclusion:** The combined emissions from the U.S. and China’s buildouts would significantly impact the global carbon budget and environmental goals:
- **By 2032 (two years after 2030):** Additional emissions of approximately 2.44 billion metric tons of CO₂, using about 0.42% of the remaining carbon budget for limiting warming to 1.5°C.
- **By 2040 (ten years after 2030):** Additional emissions of approximately 12.21 billion metric tons of CO₂, consuming about 2.1% of the carbon budget.
- **Cumulative impact by 2040:** Total additional emissions of around 19.54 billion metric tons of CO₂, approximately 3.37% of the carbon budget.
**Environmental implications:**
- Increased difficulty in meeting climate targets: the additional emissions make it more challenging to stay within the carbon budgets necessary to limit global warming to 1.5°C or even 2°C.
- Accelerated climate change effects: higher emissions could lead to more severe climate impacts, such as extreme weather events, sea-level rise, and ecosystem disruptions.
Final Note:
These estimates are subject to uncertainties due to potential changes in technology, policy, and economic factors. Proactive measures, such as investing in renewable energy and improving energy efficiency, could mitigate some of these emissions. However, without significant efforts to reduce reliance on fossil fuels, the environmental impact will be substantial and pose a serious challenge to global climate goals.
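For what it’s worth, the quoted budget percentages look internally consistent with a remaining 1.5°C carbon budget of roughly 580 Gt CO₂ (my assumption for the check below; the exact figure depends on the start year and probability threshold used). A quick verification:

```python
# Checking the carbon-budget shares quoted above against an assumed
# remaining 1.5°C budget of ~580 Gt CO2 (illustrative, not sourced).
REMAINING_BUDGET_GT = 580.0

figures_gt = {
    "By 2032": 2.44,               # Gt CO2, combined US + China
    "By 2040": 12.21,
    "Cumulative by 2040": 19.54,
}
for label, gt in figures_gt.items():
    print(f"{label}: {gt} Gt -> {gt / REMAINING_BUDGET_GT:.2%} of budget")
# -> 0.42%, 2.11%, 3.37% — matching the percentages quoted above.
```

So the percentages are just the Gt figures divided by a ~580 Gt remaining budget; the more load-bearing uncertainty is in the Gt figures themselves.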
A final note from me: inference energy costs may be far greater than assumed above. I can’t imagine a GPT-5 or GPT-6 that justifies further investment not also being adopted by a much larger proportion of the population (maybe 1 or 2 billion users, instead of 100 million).