The EA community might be an appropriate community for you, but it is hard to say this with any level of confidence since I know so little about your projects, goals, impacts, motives, etc.
A good question to ask yourself: what if you had some evidence showing that your project was having a negligible influence on the result that you wanted? Or even worse, what if it showed that your project had a negative impact? If you would change your behavior based on these facts, then that is an indicator that EA might be a good fit.
Another thing to ask yourself: if your goal is to give people more economic opportunities, is training teens in California the best way to do that? If you were instead to train teens in Dakar, or New Delhi, or Mexico City, would that get more “bang for your buck”? Or, if you were to fund school uniforms for young girls in rural [insert country here], would each dollar spent generate more in lifetime earnings than your electrical engineering training program does? These are the kinds of questions that an EA might ask himself/herself about a project like yours.
Our project is based on finding the optimum solution to a problem.
However, we reject the notion of data as god. Data is only as good as the assumptions behind it, how it was collected, and how it is interpreted. A deep knowledge of a particular target community, built on years of experience with that community, may be far more reliable than a particular set of data generated by a questionnaire.
Whether based on hard data, anecdotal data, or personal experience, we are motivated by continual improvement.
We also believe that for maximum effectiveness, any social impact activity must match the skills and passions of the social workers. If data suggested school uniforms were more beneficial than engineering-training, then we’d let someone else handle the uniforms, as our project embodies passion and skills in engineering-training.
We reject the notion of a silver bullet. We recognize that social problems are usually the result of a mix of causes, and require a mix of fixes. By that logic, a mix of engineering-training plus other fixes (say, school uniforms) will have greater impact than one fix alone. Therefore, it’s logically justified to continue our work with engineering-training, and let others handle the other fixes (such as school uniforms).
“…is training teens in California the best way to do that? If you were instead to train teens in Dakar, or New Delhi, or Mexico City, would that get more ‘bang for your buck’?”
-- Our delivery model is designed for a global presence. Our pilot program is in California, but we are developing a model that can scale out to marginalized communities all over the world at minimal cost. Our model can reach teens in California, Dakar, New Delhi, and Mexico City.
Apparently, my original post received downvotes. I’d like to understand why.
The Intro EA Program might be a good way to get more familiar with some ideas and mental tools/models that are common in EA. Doing Good Better is a fairly easy-to-read introduction to a lot of EA ideas. Scout Mindset would also be a good book to read (less for understanding EA, and more for understanding an approach of figuring out what is true, rather than fighting for what one already believes to be true).
If you are in San Francisco (or the greater Bay Area) then it might be feasible for you to meet other EAs in person and get input on how to make your project/effort better.
If you want to adopt some EA-esque practices, then measuring your impact (such as lives saved per 10,000 dollars spent, or years of incarceration prevented per workshop, or job placements achieved per SOMETHING) could be a good start. It is hard to do monitoring and evaluation well, but I’d encourage you not to let the perfect be the enemy of the good. Once you know your impact and the cost per unit of impact, you can compare, optimize, pivot, and so on.
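To make that concrete, here is a minimal sketch of the per-unit-impact arithmetic described above. Every program name and number in it is hypothetical and purely illustrative; real monitoring and evaluation would supply the actual costs and outcomes.

```python
# A minimal sketch of comparing programs by cost per unit of impact.
# All figures are hypothetical and only illustrate the arithmetic.

def cost_per_outcome(total_cost_usd: float, outcomes: float) -> float:
    """Dollars spent per unit of outcome (e.g. per job placement)."""
    return total_cost_usd / outcomes

# Two hypothetical programs measured on the SAME outcome unit
# (say, verified job placements), so the comparison is meaningful.
programs = {
    "program A (pilot site)":  cost_per_outcome(60_000, outcomes=12),
    "program B (alternative)": cost_per_outcome(60_000, outcomes=40),
}

# Cheapest per outcome first; this is the kind of ranking that lets
# you compare, optimize, or pivot.
for name, cost in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f} per job placement")
```

The only real design point is that both programs are measured in the same outcome unit; once that holds, the comparison is just division and a sort.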
Cause neutrality is a fairly important idea in EA. While I don’t think any person is truly and absolutely neutral about causes (we all have some things that resonate with us more, or pet projects that we simply care more about), in my mind the Platonic ideal of an EA would do a pretty good job of setting aside personal biases/connections/preferences and simply do whatever accomplishes the most. I’m certainly not there (I work in HR for crying out loud 😅), but it is an aspirational ideal to strive for.
In general the bar for EA projects is set pretty high. A lot of EAs might look at an electrical engineering training program and think something like:
It is great to help these kids, but for the same amount of money/time/effort as helping these ten kids each learn how to build a boombox, I could help ten other kids get an extra 15 years of healthy life. One of these needs is gonna go unmet regardless (because we have limited resources), so I’m gonna make the tough choice and put my resources in a project that will have a bigger impact (while at the same time desperately wishing that I could fully fund/support both of these projects, because from what I can tell they both make the world a better place).
To measure our impact, we’d have to fully implement our vision.
We created a successful pilot. Now we need to raise funds to fully implement our vision.
“Healthy life”? You mean, access to food and water? Great! That’s essential.
But this silver bullet idea you’re promoting isn’t possible. People need nutrition AND education. Does EA really promote the idea that we have to choose between nutrition AND education?
Most problems have multiple causes, and need multiple solutions. The idea that people should all support one thing is a grave disservice to communities in need. We need people to support a MIX of solutions.
EA sounds anti-innovation. One of the biggest innovation-killers is the inclination of funders to support a handful of large projects. Large projects are complacent and conservative. Only fresh new projects have the courage to innovate.
The idea that people should eliminate their personal biases/connections/preferences is absurd and counter-effective. Effective social impact requires that people on the front lines apply their PASSION and SKILLS. That’s what they’re good at.
It sounds like this isn’t the feedback you’re hoping for, and that sucks, but I think people aren’t sold on your model specifically. Check out Charity Entrepreneurship as an example of a nonprofit incubator for innovative / unusual ideas!