Hello,
I founded a nonprofit org to uplift teens in crisis. We work with youths on probation, formerly incarcerated, in substance-abuse treatment, and in the foster system.
The BOOM youth mentoring program exists to propel disadvantaged teens into futures of achievement, excellence, and prosperity. Our mission is to plant seeds for entrepreneurship and hardware engineering.
Winner of the Protolabs Cool Idea design award, The BOOM teaches teen apprentices with limited economic resources to fabricate and market handmade electronic hardware.
The BOOM has been featured in Electronic Engineering Journal, Make Magazine, and other journals. Partners include Engineers Without Borders. We completed an Autodesk Residency.
Our goal is to replicate our program in marginalized communities around the world.
We seek experienced electronics hardware engineers to donate a couple of hours per week. Is the EA community an appropriate path for us?
The EA community might be an appropriate fit for you, but it is hard to say this with any level of confidence since I know so little about your projects, goals, impacts, motives, etc.
A good question to ask yourself: what if you had some evidence showing that your project was having a negligible influence on the result that you wanted? Or even worse, what if it showed that your project had a negative impact? If you would change your behavior based on these facts, then that is an indicator that EA might be a good fit.
Another thing to ask yourself: if your goal is to give people more economic opportunities, is training teens in California the best way to do that? If you were instead to train teens in Dakar, or New Delhi, or Mexico City, would that get more “bang for your buck”? Or if you were to fund school uniforms for young girls in rural [insert country here], would each dollar spent generate more in lifetime earnings than your electrical engineering training program? These are the kinds of questions that an EA might ask about a project like yours.
Our project is based on finding the optimum solution to a problem.
However, we reject the notion of data as god. Data is only as good as assumptions, collection, and interpretation. A deep knowledge of a particular target community, based on years of experience with the community, may be far more reliable than a particular set of data generated by a questionnaire.
Whether based on hard data, anecdotal data, or personal experience, we are motivated by continual improvement.
We also believe that for maximum effectiveness, any social impact activity must match the skills and passions of the social workers. If data suggested school uniforms were more beneficial than engineering-training, then we’d let someone else handle the uniforms, as our project embodies passion and skills in engineering-training.
We reject the notion of a silver bullet. We recognize that social problems are usually the result of a mix of causes, and require a mix of fixes. By that logic, a mix of engineering-training plus other fixes (say, for example, school uniforms) will have greater impact than one fix alone. Therefore, it’s logically justified to continue our work with engineering-training, and let others handle other fixes (such as, for example, school uniforms).
“…is training teens in California the best way to do that? If you were instead to train teens in Dakar, or New Delhi, or Mexico City, would that get more ‘bang for your buck’?”
-- Our delivery model is designed for a global presence. Our pilot program is in California, but we are developing a model that can scale out to marginalized communities all over the world at minimal cost. Our model can reach teens in California, Dakar, New Delhi, and Mexico City.
Apparently, my original post received downvotes. I’d like to understand why.
The Intro EA Program might be a good way to get more familiar with some ideas and mental tools/models that are common in EA. Doing Good Better is an introduction to a lot of EA ideas that is fairly easy to read. Scout Mindset would also be a good book to read (less for understanding EA, and more for understanding an approach of figuring out what is true, rather than fighting for what you already believe to be true).
If you are in San Francisco (or the greater Bay Area) then it might be feasible for you to meet other EAs in person and get input on how to make your project/effort better.
If you want to adapt some EA-esque practices, then measuring your impact (such as lives saved per 10,000 dollars spent, or years of incarceration prevented per workshop, or job placements achieved per SOMETHING) could be a good start. It is hard to do monitoring and evaluation well, but I’d encourage you to not let the perfect be the enemy of the good. Once you know your impact and input per unit of impact, then you can compare, optimize, pivot, and so on.
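As a toy illustration of the comparison this kind of measurement enables, here is a minimal sketch in Python. All program names, budgets, and outcome counts are hypothetical, invented purely to show the arithmetic of cost per unit of impact:

```python
# Toy cost-effectiveness comparison. All figures are hypothetical,
# for illustration only -- not real data from any program.

def cost_per_outcome(total_cost: float, outcomes: float) -> float:
    """Dollars spent per unit of measured impact."""
    return total_cost / outcomes

# program name -> (annual budget in dollars, outcomes achieved, outcome unit)
programs = {
    "engineering training": (50_000, 10, "job placements"),
    "school uniforms": (50_000, 125, "extra school-years attended"),
}

for name, (cost, outcomes, unit) in programs.items():
    # Note: the outcome units differ, so a fair comparison would first
    # convert both to a common metric (e.g., lifetime earnings gained).
    print(f"{name}: ${cost_per_outcome(cost, outcomes):,.2f} per {unit[:-1]}")
```

The point isn’t the specific numbers; it’s that once both inputs (spending) and outputs (a measurable outcome) are tracked, programs can be compared, optimized, or pivoted away from, which is hard to do on intuition alone.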
Cause neutrality is a fairly important idea in EA. While I don’t think any person is truly and absolutely neutral about causes (we all have some things that resonate with us more, or pet projects that we simply care more about), in my mind the Platonic ideal of an EA would do a pretty good job of setting aside personal biases/connections/preferences and simply doing whatever accomplishes the most. I’m certainly not there (I work in HR for crying out loud 😅), but it is an aspirational ideal to strive for.
In general the bar for EA projects is set pretty high. A lot of EAs might look at an electrical engineering training program and think something like:
To measure our impact, we’d have to fully implement our vision.
We created a successful pilot. Now we need to raise funds to fully implement our vision.
“Healthy life”? You mean, access to food and water? Great! That’s essential.
But this silver bullet idea you’re promoting isn’t possible. People need nutrition AND education. Does EA really promote the idea that we have to choose between nutrition and education?
Most problems have multiple causes, and need multiple solutions. The idea that people should all support one thing is a grave disservice to communities in need. We need people to support a MIX of solutions.
EA sounds anti-innovation. One of the biggest innovation-killers is the inclination of funders to support a handful of large projects. Large projects are complacent and conservative. Only fresh new projects have the courage to innovate.
The idea that people should eliminate their personal biases/connections/preferences is absurd and counter-effective. Effective social impact requires that people on the front lines apply their PASSION and SKILLS. That’s what they’re good at.
It sounds like this isn’t the feedback you’re hoping for, and that sucks, but I think people aren’t sold on your model specifically. Check out Charity Entrepreneurship as an example of a nonprofit incubator for innovative / unusual ideas!
Generally speaking, organizations that tend to do well in EA clear a higher bar of rigor on theory of change: for example, being able to show an ROI comparable to one of GiveWell’s top recommended charities, or having some sort of global multiplier effect (e.g., reducing risks of future pandemics).
Your organization seems off to a great start and will probably continue to thrive in the social impact community. If you’d like to learn more about what EAs tend to care about, take a look at the problem profiles on the 80,000 Hours site.
Our org isn’t “thriving”. It has been exceedingly difficult to obtain major funding.
We focus on Appropriate Technology and School Building, which are listed.
Our model is low-cost infrastructure, to facilitate rollouts in many regions.
I’m sorry, and I really wish you guys the best of luck! It’s super competitive and many great orgs don’t clear the hurdle.
I’m amazed that “Poverty” isn’t listed as one of the most pressing world problems.
https://80000hours.org/problem-profiles/
They do have it as a problem area: https://80000hours.org/topic/causes/global-poverty/
I think they might consider it less neglected and tractable than other areas, with few opportunities for outsized impact.
I don’t know if their “Global Poverty” problem area would qualify my org for support. Do you think so?
We focus on Appropriate Technology and School Building, which are listed.
For transparency, though, I personally focus on, and donate to, organizations closer to what 80,000 Hours is talking about, because I think huge public health threats have an outsized impact on poverty and wellbeing.
“Closer”? You mean, you don’t consider Poverty to be an 80,000 Hours priority?
I’m also surprised to see this—lots and lots of EAs focus on wellbeing / reducing global poverty (see GiveWell for a helpful summary). Obviously reducing the risk of nuclear war, etc., has implications for poverty, but try GiveWell for a more direct focus.