As someone who has curated and shared my favourite essays by Holden Karnofsky, Tyler Cowen, Scott Sumner, and Joseph Heath
Hi there, would you be willing to share a link to these collections? I’d be interested in checking them out. :)
TLDR:
CAPM-certified project manager, software developer, and process improvement engineer obsessed with building systems, processes, and tools to help teams and organizations thrive. Keen on landing a role in a mid-size startup (100-1000 people) where I can serve as a bridge between technical and non-technical stakeholders within a high-performing, high-integrity team.
Skills & background:
As a technical generalist who excels in dynamic and innovative environments, I love blending information technology, human-centered design, systems thinking, and compassionate leadership to craft and lead change initiatives that produce lasting positive outcomes. My career journey includes a rich array of roles spanning mid-size startups, non-profits, and small companies, where I’ve:
Led teams of 8 to 31 members, orchestrated major business system migrations, and collaboratively developed systems and processes to foster cross-functional alignment and drive collective progress.
Spearheaded department-wide process improvement projects, developed custom software to streamline manufacturing processes, and leveraged data analysis and visualization tools to elucidate and solve business challenges.
Guided product initiatives within a 1,800+ Slack community, spearheaded engaging events and workshops to connect like-minded professionals, and automated complex workflows to optimize organizational efficiency.
In the EA sphere, I have served as a community organizer of EA Philadelphia for the past two years and briefly worked as a contractor for Impactful Animal Advocacy in a product management / tech administration capacity.
Location/remote:
Philadelphia, Pennsylvania, USA (also open to remote)
Availability & type of work:
Full-time, contract. Available to start immediately.
Resume/CV/LinkedIn:
https://quinnmchugh.net/resume
https://linkedin.com/in/quinnpmchugh/
Email/contact:
qpmchugh@gmail.com
https://cal.com/quinnm
Other notes:
Cause agnostic, but especially interested in institutional decision-making, animal advocacy, and meta-EA.
My work is characterized by the following 3 pillars:
Pluralistic approach—I believe innovation is the offspring of diverse viewpoints. I embrace a collaborative, multifaceted approach to problem-solving and continually seek out insights from a variety of domains to incorporate into my work.
Compassionate leadership—I strive to cultivate high-performing, high-integrity teams where every member feels valued and motivated to do their best work. I care deeply about the impact of information flows, infrastructure, and incentives on organizational effectiveness, team synergy, and individual empowerment, and strive to create environments that promote mutual growth and foster collaborative excellence.
Impact-driven mindset—For me, success is measured by the positive change we create. I prioritize work that is high-leverage, delivering substantial, positive change with minimal cost.
Great list, Kyle! Thanks for sharing. :)
I wasn’t aware of The Life You Can Save’s Helping Women & Girls Fund until I read your post. It’s wonderful to know something like this exists.
Hi Rakafet,
Welcome to the EA Forum!
I never knew the Abstinence Violation Effect had a name—I think that’s something I’ll have to add to my lexicon. :)
While reading through your post, I had a bit of trouble understanding your arguments and the evidence behind why you think this intervention is particularly important and neglected.
If I’m understanding correctly, your argument is:
More funding should be directed towards providing vegan food to soldiers, who experience difficulties maintaining a vegan diet. By providing this support, we could reduce the likelihood of soldiers falling victim to the Abstinence Violation Effect and abandoning their vegan diet altogether, which could affect ~4000 animals over the span of a given soldier’s life.
Would you say that’s accurate?
Hi David,
Memex.garden seems very similar to what you’re describing.
I recently came across this great introductory talk from the Center for Humane Technology, discussing the less catastrophic, but still significant risks of generative large language models (LLMs). This might be a valuable resource to share with those unfamiliar with the staggering pace of AI capabilities research.
A key insight for me: Generative LLMs have the capacity to interpret an astonishing variety of languages. Whether those languages are traditional (e.g. written or verbal English) or abstract (e.g. images, electrical signals in the brain, wifi traffic, etc.) doesn’t necessarily matter. What matters is that the events in that language can be quantified and measured.
While this opens up the door to numerous fascinating applications (e.g. translating animal vocalizations to human language, enabling blind individuals to see), it also raises some serious concerns regarding privacy of thought, mass surveillance, and further erosion of truth, among others.
https://www.quinnmchugh.net/blog-micro/llms-as-translators
@Felix Wolf Thanks for taking the time to explain, Felix. This makes sense now.
This is fantastic! Props to the Type 3 Audio and EA Forum team.
Quick question regarding accessibility:
I’m aware EA Forum posts can both:
Customize the alt text of an image
Provide captions to an image
Is this information included in these audio narrations?
Only accounts with at least 100 Karma on forum.effectivealtruism.org or lesswrong.com are allowed to vote.
I’m a little confused by this. What’s the motivation for using a karma threshold to decide who does and doesn’t get to vote?
Hi Harrison,
Love this idea—thanks for taking the time to write this.
I think idea sharing, discussion, and coordination for that matter among community organizers could be so much better than it is today. Discussion boards and messaging platforms are limited in that you have to sift through the entire discussion before being able to compare and contrast the perspectives shared. These kinds of discussions also seem very ad-hoc and the value they generate diminishes over time as fewer and fewer people end up finding and engaging with the information.
In general, I think there’s a huge opportunity to leverage next-generation web technology to map arguments, cause areas, and other forms of knowledge such that it can be transferred more easily between people and organizations across space and time. This is what organizations like OpenGlobalMind are striving to do.
As Babel mentioned though, I think the biggest challenge is adoption. Community organizers tend to operate in different timezones and contexts. I think you would need at least a couple of high-profile people in the community on board in order to establish this as a norm, unless this technology were to be integrated directly into the forum (which, frankly, would be fantastic!)
Finally, are you aware of DebateGraph? It might also pique your interest.
Hey David, you might already be aware, but Vaidehi Agarwalla has recently spearheaded a project to migrate numerous EA Slacks, Discords, etc into the EA Anywhere Slack.
Edit: I see you’re one of the editors! I’ll keep this comment up for others to reference.
Hey @Alex Barnes,
As a follow-up to this: the open-source community has since released Ferdium, a free desktop app that makes it much easier to view all your services in one place.
Hi EAlly,
It seems like there are numerous questions to unpack here. If I’m understanding you correctly, it seems like you’re generally curious about how others have sought to increase their impact through an EA lens, given a background in IT. Is that right?
If so, I think your questions might be better answered by searching for, reaching out to, and scheduling informational interviews with people working at the intersection of EA and IT. I previously came across a helpful framework for doing this sort of thing here: [Webinar] The 2-Hour Job Search—YouTube
From one generalist IT person to another, would it be helpful to hop on a call to discuss your uncertainties? https://calend.ly/quinnpmchugh/meet
While I may not have a lot to offer in terms of career guidance, I can certainly relate to your position. My background is in mechanical engineering, but I currently do a mix of IT, operations, project management, and software engineering work. Professionally, I am interested in moving into project management full-time, but am also very interested in leveraging my IT skills to improve the movement’s overall coordination and intellectual diversity through projects like EA Explorer.
Based on this realization, am I right to assume you are no longer interested in coordinating this book club?
I came to the same realization after discussing it with a few members of our local group, but given my interests in animal advocacy, I think it’d be personally valuable for me to engage with.
Thank you for pointing this out Lorenzo—I’ve removed the link from my comment.
Thanks for setting this up @Kaleem! I hadn’t heard of this book until a colleague of mine mentioned your post in our local group’s (EA Philadelphia’s) Discord.
Edit: Link to book PDF removed
This course seems valuable. Thanks for sharing!
I could see this being an inspiring resource for EAs who struggle with imposter syndrome (myself included), especially when paired with posts like seven ways to become unstoppable agentic.
Thanks for taking the time to run and analyze this survey.
Are there plans to include questions about income and/or financial stability in next year’s survey?
Rationale: I believe this data would be valuable in providing individuals with a clearer understanding of the financial security of others in the EA community and could help newcomers assess whether the advice they receive is relevant to their own financial situation. Many recommendations and norms within EA—such as unconventional career choices, significant donation pledges, or risk-taking in pursuit of impact—can have vastly different implications depending on who’s making the recommendation or reinforcing the norms.
If a significant portion of the community has financial security, it’s possible that commonly shared advice assumes a level of stability that not all newcomers have. Understanding the financial realities of EA members could help provide more contextually appropriate guidance and ensure that discussions around impact, risk, and career planning are inclusive of people from diverse economic backgrounds.
Would love to hear your thoughts on this!