That link sent me here:
Also, I registered for an event on the 15th, but I never got any sort of confirmation email; no idea if I’ll be approved in time.
I’ve seen this sort of thing before; it reeks of hacking by a third party.
I’m the author of more than 100 high-quality entries in the AI Safety Arguments Competition and the writer of this post. I effectively majored in US-China affairs, specifically on the axis of AI and tech policy. I live and breathe this stuff, verifiably so, and I will DM you my email address because I intend to contribute significantly.
But I want to clarify that mass public outreach, by default, comes with profoundly complicated and unpredictable risks, and it’s certainly not the kind of thing that someone can intuitively conclude is a good or bad idea.
This is exactly the kind of domain where very smart and insightful people trip up, in ways that no amount of intelligence gives someone a fair chance of getting right without deliberate guidance from someone with arcane, technical experience in the area. In the best case, it means reinventing the wheel; in the worst case, it gets the entire concept of AI safety burned.
A couple of weeks ago, I wrote a really helpful flowchart designed to introduce newcomers to international AI policy, so that at the very least they don’t waste a ton of time and energy reinventing the wheel: https://www.lesswrong.com/posts/brFGvPqo8sKpb9mZf/the-basics-of-agi-policy-flowchart
Ultimately, the best strategy is meeting policy folks and talking to them; nothing on the internet can really substitute for that.
Is the brunch outdoors?
You shouldn’t be surprised if there’s a significant uptick in public discourse about both effective altruism and longtermism in August!
Has anyone given significant thought to the possibility that hostility to EA and longtermism is stable and/or endemic in current society? For example, opposition to AI capabilities development (a key national security priority in both the US and China) may have prompted agreements to keep negative attitudes about AI from becoming mainstream or spreading to large numbers of AI workers, regardless of whether the threat comes from an unexpected place like EA or an expected one, like Russian bots on social media (which may even have used AI safety concepts to target Western AI industry workers in the past).
In that scenario, social media and news outlets will generally remain critical of EA, and staying out of the spotlight and focusing on in-person communication would have been a better option than triggering escalating media criticism of EA. We live in the post-truth era, after all, and reality is allowed to do this sort of thing to you.
Want.
Counterpoint: right now, “social and behavioral science” is the least neglected field on earth. It is a multi-trillion-dollar industry, possibly larger than all of academia in human history (especially if you discount 20th-century physics); and rather than merely paying for itself immediately after the research is finished, it pays for itself even earlier. Governments fight wars over it, and ludicrously large corporations fight to monopolize it.
If academia’s first-mover advantage (e.g. “psychology”, “statistics”) were enough to secure a large share of the future of social and behavioral R&D, then academic R&D on social and behavioral science would be fine, and this paper never would have been written.
It’s a very well-written paper, and I’ll be saving it because it has good info for my open-source intelligence work. But academic behavioral sciences have stagnated because academia has stagnated, not because behavioral sciences have stagnated.
A lot of people have been hoping for software systems like these for decades. I have no doubt that EA can oversee/fund extreme improvements on all sorts of existing systems and designs, and that reinventing the wheel might be necessary wherever existing systems are inaccessible.
I wasn’t sold on this at first. But at this point it’s pretty clear that EA has good odds of single-handedly revolutionizing all of suicide prevention. The asymmetry of skill and pragmatism is just too steep. I can see EA becoming one of the big fish within a couple of years, if people start yielding results quickly enough.
Potentially helpful information: fully driverless taxis were licensed in Shanghai a few days ago, which should revise our self-driving-car trajectories upward (in particular, upward from the problems faced by Cruise taxis in San Francisco last month, since China evidently ignored those concerns, even though that sort of thing was supposed to be taken seriously there).
This seems like a really, really good fit for Open Philanthropy’s work. It’s also exactly the kind of thing that’s needed right now: something that ridiculously large numbers of people need, something that can visibly change the course of civilization and put OpenPhil at the heart of it, and something that only Open Philanthropy can give. Especially now that a lot of public attention is scheduled to be focused on Will’s new book, which means a lot of public attention on AI, which is a really big can of worms.
I think there are a lot of really good governance/NatSec people affiliated with EA who know a lot about this, and it would be a really good idea to run it by them. Core EA principles have opposed getting entangled with partisan politics for several years now, and as one of those governance/NatSec people, I can definitely say that it’s much more complicated and nefarious than it appears.
But I can also say that this post makes a very solid case against those anti-political core principles, even if some things were extremely incorrect (e.g. “Democrats have a slim window to do something to save their Republic now, but they’re not taking it because Manchin and Sinema are idiots”).
There are definitely plenty of extremely plausible scenarios where EA needs to become much less risk-averse and focus on high-risk, high-reward hail-mary solutions. There are definite time constraints on getting that foot in the door, no matter how you look at it.
It looks to me like establishing tractability under today’s Science Circumstances means focusing on those circumstances. That means researching the Infodemic, which means researching misinformation and information monopolization, which in turn means studying politics, cybersecurity, and information asymmetry in business/microeconomics.
I can definitely see these vetting mechanisms as being tractable, but broader change in this area needs to take a broader world model into account.
I was very impressed by the “Is there already anything alike” section. However, it needs to be bigger, and there are a lot of people involved in EA with a ton of experience in regulation who can offer really good research suggestions and case studies. The prospects are clearly worth further research.
People might be more impressed with you in a way you find disturbing… I am floored that there are people who want to read a pile of links I have for them or trade contact information so they can ask my advice about their education or careers. It’s kind of a scary amount of influence over and responsibility towards someone to suddenly have.
The bot hypothesis checks out. This should be much more frequent in IRL conversations.
I think it’s worthwhile to note that we’re also living in an infodemic; correct information about COVID and its effects is Goodhart’s Law on steroids. Fear of exposure means fear of going to work, which makes or breaks entire economies. That’s a lot of money at stake. I research various anti-inductive areas for a living, and I know how this sort of thing works.
As a rule of thumb, P100 masks and taking meals/breaks outside are currently the best ways to avoid brain injury while continuing to do maximum work as an EA. Vaccination status does not clearly indicate any increased protection.
By voting, do you mean elections or referenda? If so, I don’t think that would be a good example; it really sells short the entire concept of software systems augmenting collective intelligence.
I’m worried that a lot of people are going to miss this point: EA needs to diversify. There are too many clever, creative, and pragmatic people hitting diminishing returns in the same high-impact cause areas. There’s plenty of money, and adding workers in the policy arena will have a network effect that benefits everyone. It’s not perfectly optimal, but this isn’t an optimal planet.
Thanks to your advice, I found a meetup within a week from today. I hadn’t been able to look properly due to the stress of recently arriving in the East Bay without any of my furniture. Thank you for all the help.