I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.
My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about my conclusion that climate change was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was important but neglected, and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued E2G with my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to help build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff, like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.
No, my comments are completely novice and naïve. I think I am just baffled that all of the funding of AI safety comes from individuals who stand to profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this peculiar combination of incentives. I listen to a few AI podcasts and browse the forum now and then – why am I only hearing about this now, after a couple of years? I am not sure what to make of it; my main feeling is that the relative silence on the topic is somehow strange, especially in a community that places such importance on epistemics and biases.