Which Post Idea Is Most Effective?

Hey everyone, I am extremely excited to write my first-ever post on the EA Forum!
I figure, why not come out swinging and go meta right off the bat with a post asking which of my post ideas I should turn into a full post? After all, the meta, the better (said with a British accent), right?
I only joined the movement three months ago through the University of Southern California group, but I have been thinking along EA lines for many years. I have collected a couple dozen post ideas while learning about EA in most of my free time over the past few months. I'd love any feedback on which would be most interesting and useful to community members! I'd also appreciate references to work already written on these topics.
1. If AI inevitably or almost inevitably dominates the future of the universe, then the hard problem of consciousness and how to ensure conscious, happiness-seeking AI may be the most important cause area
2. Are intelligence, motivation, consciousness, and creativity all orthogonal, especially at the upper limit of each? If not, what does this mean for AI?
3. An analysis of impact investing as an EA cause area
4. Analysis of the identity of consciousness, i.e. is consciousness instantaneous (as in Buddhism, non-self or emptiness), continuous over a lifetime (similar to the notion of a soul, with your consciousness starting at birth and ending at death), or universal (the exact same consciousness is in every conscious being simultaneously), and what does this mean for practical ethics and the long-term future of the universe?
5. Could the future state of a democratic, believability-weighted public goods market (essentially a futarchy system) be the system that Yudkowsky's coherent extrapolated volition, an AI alignment mechanism, uses to have AI predict humanity's ultimate preferences?
6. Why I'm more concerned about human alignment than AI alignment; why rapidly accelerating technology will make terrorism an insurmountable existential threat within a relatively short timeframe
7. A dramatic and easy IQ boost for EAs: evidence suggests creatine boosts IQ by 15 points in vegans; plus the importance of vegan supplements generally
8. Social psychology forces that may cause EAs to be hyper-focused on AI
9. A massively scalable project-based community building idea
10. Takeaways from EAGx Boston
11. Why existential hope (positive longtermism) may be much more effective at reducing existential risk than directly trying to reduce existential risk
12. Speeding up human moral development may be the most effective animal welfare intervention
13. A series on effective entrepreneurship
14. My approach to organizational design
15. A marketing survey on which EA messaging has been most persuasive to community members
16. Why I think broad longtermism is massively underrated
17. What if we had a perpetual donor contest and entrepreneurial ecosystem rather than just a donor lottery?
18. The joys of Blinkist (a book summary app) for rapid, broad learning
19. Is there a GiveWell for longtermism? There should be.
20. My current possible career trajectories and a request for feedback/advice
21. How I came to longtermism on my own, and what I think EA longtermism may be getting wrong
22. Initial thoughts on creating a broad longtermism fellowship
23. An EA dating site idea and prototype
24. The Ultimate Pleasure Machine Dilemma: if you had the opportunity to press a button that turns the entire universe into a perpetual motion pleasure machine, eternally forcing the entire universe into a state of maximum happiness (however you define that), would you press it? (This one was inspired by USC EA Strad Slater.)
Feel free to just comment the number or numbers you think are most effective, or to argue why you think so. I really appreciate your feedback. Thanks, everyone!