Anecdotally, in descending order of efficacy: a phone call to a representative from one of their constituents > an email from one of their constituents > a petition.
In Bostromian AI safety, people often talk about human-level intelligence, defined roughly as more efficient mental performance than humans on all tasks humans care about. Has anyone ever tried to sketch out subsets of human abilities that would still be sufficient to make a software system highly disruptive? This could develop into a stronger and more specific claim than Bostrom’s. For example, a system with ‘just’ a superhuman ability to observe and model economies, or to find new optimization algorithms, could be almost as concerning as a more vaguely defined ‘human-level intelligence’ AGI.
I also think there is some demand for less formal discussion here (such as in Open Threads, like you say), given the currently somewhat low population at lesswrong.com.
I agree that a subreddit could fill this niche; however, I am not aware of any existing subreddit that currently does.
Curiously, the only typo I see is in the line about typos at the end.
Do you know of any resources discussing the pros and cons of introducing new government agencies?
An idea worth discussing, regardless.