Philip Gubbins - Got into EA at Vanderbilt University because of some close friends, then did some contract work for a few EA orgs. Now working at AE Studio, a bootstrapped for-profit (the CEO is an EA) trying to solve alignment and differentially develop technology and neurotechnology (and I want to support and learn as much as I can!)
phgubbins
AE Studio @ SXSW: We need more AI consciousness research (and further resources)
Hi, I'm off the project now, but to my knowledge it is still hibernating (unless otherwise announced, I think it will remain so, and I believe reviving it would be contingent on a serious matching-funds opportunity).
EA Giving Tuesday Hibernation
Marginal Reducetarianism
Liked the post; it has likely shifted me further toward less diversification and less hedging of my altruistic bets.
Regarding the title, upon first reading it I did a double take, thinking this post might be about diversity in EA! I could see it currently being a bit more ambiguous than something like "against philanthropic diversification". Though I also think this is down to my personal context and might be silly (I tend to read "diversity" as social rather than financial).
Here's the most Dr. Seuss-like version, which we are putting together for publication: https://www.thecashthatyoustash.com/
Hi! AE Studio is working on some (Dr. Seuss-inspired) children's books to also share EA and EA-adjacent ideas. Here's one we've been working on (happy to receive feedback too):
https://www.yourlivingandgiving.com/
We are currently taking steps to publish our first book like this, one on personal finance, and we have written the script for another inspired by Peter Singer's pond (drowning child) thought experiment.
I have recently been learning how to publish (through IngramSpark) and may be able to help a little bit with that sort of stuff too if you’d like! Would generally be excited to connect!
Just to further update: the limit for a single recurring donation was recently brought back down to $100, as I know Will is already aware.
It seems like it could be a case of trying to maintain some standard of high fidelity with EA ideas: avoiding dilution of the community and of the term by not too eagerly labeling ideas as "EA".
I had never considered the first point regarding a local maximum. It is an interesting thing to explore, but I'm unsure, except perhaps in a more ideal world, that we are capable of consistently reaching more than local maxima. (And yeah, dogs seem to be one of the best (easiest) one-time actions someone can take for their happiness: https://jamesclear.com/how-to-automate-a-habit, where the author surveys his own audience, they produce this tidbit, and it matches my intuition.)
And this strikes me as similar to my overall impression of dog-free (or pet-free?) as a movement. I recall a friend describing it to me as a potential ongoing moral catastrophe that people in the future would be horrified by, which I agree with (particularly the pug example, which I can imagine being extrapolated to all dogs to some extent, as you said!). But I currently feel far more horrified by other things than by this specific cause area (others with much greater scale). It feels like a step for later moral progress, somewhat along the lines of the discounting argument: "people are starving now, why pursue better lives for animals before them?" (I don't really subscribe to this argument.)
I think the idea of dogs replacing children is really interesting and I will definitely think about that a bit more in the future!
Thanks for sharing.
Recently I was looking into EA organizations, and I thought it might be useful to have a visualization of this database compiled by Michel Justen. The visualization was rushed out as part of a hackathon at AE Studio, with the help of Jean Mayer, a dev there.
https://ae.studio/ea/organizations
This is pretty rudimentary and feedback is more than welcome, especially regarding how I might best compile some of the data below to include in a future version in an actual post (a rough sketch of the kind of chart I have in mind is included below).
Organization size by workers (as represented by bubble size)
Funding per organization (as represented by bubble size) (and also per cause area)
Potentially provide data over time
Perhaps a short blurb to further specify how orgs within a cause area differ from each other
Other info?
Also, I think it could certainly look better (with more time spent on polishing the visualization), for example by better truncating the cause-area labels for orgs that list many cause areas.
This could provide a cool visualization of the comparative efforts within our community, by cause area and by 'bubble size', and help people understand a bit more about EA organizations and what it means to be 'EA'.
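For concreteness, here is a minimal sketch of the kind of bubble chart I have in mind, using Plotly in Python. This is not the actual ae.studio implementation; the column names (org, cause_area, workers, funding_usd) and the sample rows are hypothetical placeholders, not fields taken from the database.

```python
# Minimal, illustrative bubble-chart sketch (assumed schema, made-up sample data).
import pandas as pd
import plotly.express as px

# Hypothetical rows standing in for the compiled organization data.
orgs = pd.DataFrame({
    "org": ["Org A", "Org B", "Org C"],
    "cause_area": ["Global health", "Animal welfare", "AI safety"],
    "workers": [120, 15, 40],          # headcount -> bubble size
    "funding_usd": [5_000_000, 400_000, 2_000_000],  # could also drive bubble size
})

fig = px.scatter(
    orgs,
    x="cause_area",
    y="funding_usd",
    size="workers",        # bubble size by number of workers
    color="cause_area",
    hover_name="org",
    title="EA organizations by cause area (illustrative data)",
)
fig.show()
```

Swapping `size="workers"` for `size="funding_usd"` would give the alternative view where funding drives bubble size.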
What would it look like for an organization or company to become more recognized as an ‘EA’ org/company? What might be good ways to become more integrated with the community (only if it is a good fit, that is, with high fidelity) and what does it mean to be more ‘EA’ in this manner?
I recognize that there is a lot of uncertainty/fuzziness with trying to definitively identify entities as ‘EA’. It is hard for me to even know to whom to ask this question, so this comment is one of a few leads I have started.
I am generally curious about the organizational/leadership structure of “EA” as a movement. I am hesitant to detail/name the company as that feels like advertising (even though I do not actually represent the company), but some details without context:
Part of its efforts and investment are aligned with reducing a potential x-risk (factor?): it aims to develop brain-computer interfaces (BCI) that increase rather than hinder human agency.
Aims to use BCI to improve decision-making.
Donates 5% to effective charities (pulled from ~GiveWell) and engages employees in a ‘giving game’ to this end.
A for-profit company without external investors, a criterion they believe is necessary to stay focused long term on prioritizing human agency.
On forms that ask someone to fill out "How are you engaged with EA?"
With answers like:
“Accepted a job/changed career path due to EA”, “Changed college studies...”, “Committed to donating x% income/year”, “Have gone to an EAG”, “Engage with EA Forum/rationalist blogs”, etc.
Would it make sense to include “Changed diet due to EA considerations”? (or perhaps ‘my diet is in line with EA considerations for animal welfare’? Though I doubt EA really prescribes a certain diet… so perhaps here is a clue as to why it’s not included.)
I just recall filling out a form for some org that had these options but not the last one, and I was surprised that animal welfare was not represented: something that I try to do and attribute to EA was missing. Especially since I think, as a behavior, it could also be a decent proxy for someone's engagement with (perhaps investment in, or sacrifice for?) EA.
So, I suppose this is not much of a call to action, considering I cannot even name where I encountered this; it is more a passing comment on how animal welfare feels sidelined to me. But if someone has further insight into my predicament, I'd appreciate the help.
Cross-commenting from LessWrong for future reference:
I had an opportunity to ask an individual from one of the mentioned labs about plans to use external evaluators and they said something along the lines of:
“External evaluators are very slow—we are just far better at eliciting capabilities from our models.”
They had earlier said something to much the same effect when I asked whether they'd been surprised by anything people had used deployed LLMs for so far, 'in the wild'. Essentially: no, not really, maybe even a bit underwhelmed.