I made it up[1].
But, as I say in the following sentences, it seems plausible to me that without betting markets to keep the numbers accessible and Silver to keep pushing on them, it would have taken longer for the initial crash to become visible, it could have faded from the news, and it could have been hard to see that others were gaining momentum.
All of these changes seem to increase the chance of Biden staying in, which was on a knife edge for a long time.
Can I tweet this? I think it’s a good take.
I am happy to see this. Have you messaged people on the EA and epistemics Slack?
Here are some epistemics projects I am excited about:
Polymarket and Nate Silver—It looks to me that forecasting accounted for 1–5% of the reason the Democrats dropped Biden from their ticket. Being able to rapidly see the drop in his percentage chance of winning during the debate, holding focus on his poor performance over the following weeks, and seeing momentum increase for other candidates all seemed powerful[1].
X Community Notes—It is great that one of the largest social media platforms in the world has a truth-seeking process with good incentives. For all Musk’s faults, he has pushed this, and it is to his credit. I think someone should run a think tank to lobby X and other orgs towards even better truth seeking.
The Swift Centre—Large conflict of interest, since I forecast for them, but it is a forecasting consultancy that is managing to stand largely (entirely?) without grant funding, just getting standard business gigs. If I were going to suggest epistemics consulting, I’d probably recommend us. The Swift Centre is a professional org that has worked with DeepMind and the Open Nuclear Network.
Discourse mapping—The same discussions happen over and over, and we don’t move forward. Personally I’m really excited about trying to find consensus positions so that focus can be freed up for more important things. Here is the site my team mocked up for Control AI, but I think we could have similar discourse mapping for SB 1047 or for different approaches to AI safety.
The Forum’s AI Welfare Week—I enjoyed a week of focus on a single topic. I reckon if we did about 10 of these we might really start to get somewhere. Perhaps with clustering of participants into groups based on their positions on some initial spectra.
Sage’s Fatebook.io—A tool for quickly making and tracking forecasts. It is the only tool I’ve found where, when I show it to non-forecasting business people, they say “oh, what’s that, can I use that?”. I think Sage should charge for this and try to push it as a standard SaaS product.[2]
And a quick note:
An example of a potential project here: A consultancy which provides organisations support in improving their epistemics.
I think the obvious question here should be “how would you know such a consultancy has good epistemics?”.
As a personal note, I’ve been building epistemic tools for years, eg estimaker.app, or casting around for forecasting questions to write on. The FTXFF was pretty supportive of this stuff, but since its fall I’ve not felt like big EA finds my work particularly interesting or worthy of support. Many of the people I see doing interesting tinkering work like this end up moving to AI Safety.
So you’d say the major shift is:
Towards AI policy work
Towards AI x bio policy work
Also this seems notable:
Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being “EA-adjacent”.
How is this edit?
[Question] In the last 2 years, what surprising ideas has EA championed or how has the movement changed its mind?
I want to say thanks to people involved in the EA endeavour. I know things can be tough at times. You didn’t have to care about this stuff, but you do. Thank you, it means a lot to me. Let’s make the world better!
My handwavey argument around this would be something like “it’s very hard to stay caring about the main thing”.
I would like more non-EAs who care about the main thing—maximising doing good—rather than just more really talented people. If this is just a cluster of really talented people, I am not sure that we are different from universities or top companies, many of which don’t achieve our aims.
Can the answers from the video be posted as answers to the questions, so that we can see each answer next to its context?
Sorry, my bad. Corrected.
Can I suggest that Jimmy and Mark Rober work on another fundraising campaign to raise a million dollars for GiveDirectly? Perhaps, after Team Trees and Team Seas, you could call it Teem Fees?
Thank you for your work, and for Jimmy’s willingness to take hits for giving people resources directly. It feels like, outside of EA, helping poorer people isn’t seen as that much more effective than helping people in the US, and many philanthropists stick to supporting people in the US. Thanks to Jimmy for sticking his neck out here; it does feel like it makes a difference to the Discourse.
Currently Beast Philanthropy’s approach is pretty scattershot—mainly focused on helping people directly in a manner that can be turned into videos. This makes sense, given your context.
Is there a plan to scale previous interventions up?
If so, how will you decide which?
I think that being there at the start of a discussion is a great way to shift it. Look at AI safety (for good and ill).
I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores that perform calculations. What is missing, in terms of a GPU?
I think bacteria are unlikely to be conscious due to a lack of processing power.
Training probably takes 3 years to spin up and maybe 3 years to happen. When did we decide to start training people in AI Safety, versus when were there enough of them?
It seems plausible to me that the AI welfare discussion will happen before we are ready for it.
Would it be wrong to dissect the child?
Here is a different thought experiment. Say I was told that, to find the cure for a disease that would kill thousands of robot children, I had to either dissect the supposedly non-sentient robot or dissect a different, definitely sentient robot. Which option do my intuitions point to here?
Maybe but let’s not overcomplicate things.
How about now? https://nathanpmyoung.substack.com/p/forecasting-is-mostly-vibes-so-is