Article I wrote about the recent Tory Leadership debates:
I was nearly going to post this article, so I'm glad you already have. I think it provides an interesting framework for understanding people's worldviews (telos, existential threat, etc.). I have found it useful when discussing people's views with them: "What do you think is the purpose of life? What do you fear?" etc.
Thank you for your work. It seems like a really important thing to study. Thank you for taking the time to lay out your plans so clearly.
Do you think your work will at any point touch on how individuals could live in ways that would make them happier or give them greater well-being? I think there is room for publishing a kind of workflow/lifehacks guide to help people know how their lives could be better. I acknowledge that's not what you speak about here, but it seems adjacent. Perhaps another reader could point me in the direction of this.
We think well-being consists in happiness, defined as a positive balance of enjoyment over suffering. Understood this way, reducing misery increases happiness.
Sure, though there are some kinds of misery you don't want to reduce. I could choose not to attend my father's funeral, and that would reduce misery. Do you have any idea how you will account for "good sadness"? If you will avoid those kinds of interventions, how will you choose your interventions, and how will you avoid bias in doing so?
Non-EA Parody Video I Made
I made a Brexit parody of Remix to Ignition. You folks are a community I'm part of, and I think sharing what you are proud of (or what is uniquely you) is a great part of community life.
(As an aside I’d like to do some EA rap if I could think of a good idea of how to do it. Alternatively if you want rap marketing of an EA organisation or rap at an event then we can talk)
Regarding there being answers: that's good to know, I guess I will search for them. Also, I've just found LessWrong, which is useful.
I’ll check out those links.
You might be right about a broad discussion. If it turns out that issues haven’t been covered I might come back and write a more specific piece.
I have not spent any time in local EA communities. I'd like to, but that will involve working out where I'm going to live next.
Thanks for your time.
Yeah, I wonder if there is any home-finding app in the EA community. I'd love to live with some people with similar views. (I am equally wary of going from one strict ideology to another, but there we are.)
In this sense I think the government should create appropriate incentives for long-term committed relationships where children are concerned. Perhaps something like a no-claims bonus (in the UK, an insurance benefit that increases for each year you don't crash your car): a growing yearly benefit for parents who stay together until their last child is 18?
It's fair to remove comments one no longer supports, but if someone did say this, I'd agree. :P I guess it stands out a mile.
Hey thanks for replying,
Sure, it's a question of maximising effect. I don't know what is best. 80k say it's not the most effective; I suppose you'd have to ask them how that explanation works.
Certainly it's a better thing to do than building bombs, but as to whether it's as good as AI policy, 80k says no.
What do you think?
A core question for me is still: "Is EA's main aim to grow enough to affect government policy?" That would let it deal with the problems EA organisations work on at the level of incentives, so that non-EAs would be properly motivated to solve problems that affect all our wellbeing.
In that sense, correcting an externality is better than lobbying firms/consumers to ignore it (which is roughly what we currently do). Am I wrong here? If growth isn't EA's main aim, why not? Something doesn't add up.
I suppose the best answer I can expect is "we don't know that's more effective" — thanks to Aaron, who showed me how GiveWell is starting to look at this. But at some level that will stop being true: if EA had 51% support, then we could just vote through the measures we wanted (with some ethical nuances).
So the secondary question is: do we have any idea when this shift from lobbying individuals to lobbying/participating in government ought to take place? How many EAs should exist in a country before they make a concerted effort to lobby directly? That seems a fairly crucial detail.
Is there a particular article or statement from an organization that made you think influencing legislation isn’t one of the movement’s aims?
I suppose from what I've read I get the sense it's mainly about careers and philanthropy rather than lobbying/activism, though that may be a case of what you later describe. Also, @anonymous_EA's post does suggest this idea:
EA growth itself is much less prioritized now than it was a few years ago.
Thanks for your time, I’ll look into the influencing policy stuff.
I’d like someone to research and plot the graph fully and do some tests. Let’s see, I guess.
Thanks for writing this.
I think we should seek to maximise both our own and everyone's wellbeing, and that probably means productivity is good for others while self-care and things we enjoy are good for us. I'm not quite sure if you agree or disagree.
I think we need to learn to be satisfied with good but also strive for better. That's a hard balance. It's worth remembering that if we have interesting, satisfying jobs, disposable income, a few hours of free time each day, and safety for ourselves and those we love, we are doing really well worldwide, so it's worth working both for the good of others who are less well off and for our own benefit.
Interesting post. Thank you for writing it. Attractive graphs.
I wonder if there could be a kind of "Tripadvisor"-style badge to show how well charities/interventions are doing, in a way that encourages them to improve.
You mention it, but a key strength and issue is that EA is exclusive. It only wants to do the most good, so it only recommends the best charities, but it therefore doesn't encourage middling charities/interventions to be better.
There is a hard question here: does EA want those charities to get better, or does it want them to end? Do we look down on individuals and organisations backing or using inefficient approaches? Have we become something akin to a purity cult? That might be unreasonable, since refusing to engage with successful, highly-backed, middle-efficiency approaches could be a failure to improve them and do more good.
The real kicker, I think, is: do you get more good per $ by raising the high end or by shifting the whole graph to the right? Has anyone done any research on this? Either way, it seems useful not to become sneery or superior towards middle-efficiency approaches, and it doesn't cost much (I think, though perhaps I'm wrong) to be gracious to those we think are doing some good but not as much as they could be.
How can one incentivise the right kind of behaviour here? This isn’t a zero sum game—we can all win, we can all lose. How do we inculcate the market with that knowledge such that the belief that only one of us can win doesn’t make us all more likely to lose?
Off the top of my head:
Some sort of share trading scheme.
Some guarantee from different AI companies that whichever one reaches AI first will employ people from the others.
I suppose I don’t understand why the aim isn’t to grow the movement more to eventually influence legislation.
Likewise if that will one day be the aim at what point will the switch come?
I hate that I made you feel that way.
No need to apologise. I didn't mean my original comment individually, more as a kind of "gee whiz" at how much the blog post bombed in general. But as I say, that's okay: no one was unkind, they just didn't like what I wrote. I think it can be easy for communities like this to be very dog-eat-dog, so I think a little vulnerability/honesty might go a long way. Recently I have learned that when I am insecure enough to be tempted to "man up", it's often better to show vulnerability.
What are the issues with weighting votes by demographics? Let me see if I understand. Say tall EAs want a slightly different thing than short EAs. Scaling votes by height, as if the forum were a survey, means that if we have fewer tall EAs than their share of the population, each of their votes would be weighted more. The top posts/comments would then be more likely to contain things that appealed to them, since (if they comprised half the reference population) they would control half the weighted votes. So new tall EAs would visit a site closer in tone/culture to what they would enjoy.
I don't see why this would result in a less rational site, but if it turned out certain issues were culturally more important to short EAs, it would be good to notice that, rather than thinking it was about rationality.
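To make the analogy concrete, here is a minimal sketch of the kind of survey-style reweighting I have in mind. The group names, shares, and function name are entirely made up for illustration; real forum voting would be far more complicated.

```python
# Hypothetical sketch of demographic vote reweighting
# (post-stratification, as used in surveys). All names and
# numbers here are invented for illustration.

def reweight_votes(votes, target_shares):
    """Score a post so each group's total influence matches its
    target (population) share rather than its share of voters."""
    # Observed count of voters in each group
    counts = {}
    for group, _ in votes:
        counts[group] = counts.get(group, 0) + 1
    total = len(votes)

    # Weight = target share / observed share, so an
    # under-represented group's votes count for more
    weights = {g: target_shares[g] / (counts[g] / total) for g in counts}

    # Weighted score: sum of each vote times its group's weight
    return sum(weights[g] * v for g, v in votes)

# Three "tall" voters and one "short" voter, but the reference
# population is assumed to be 50/50:
votes = [("tall", 1), ("tall", 1), ("tall", -1), ("short", 1)]
score = reweight_votes(votes, {"tall": 0.5, "short": 0.5})
# tall weight = 0.5 / 0.75 = 2/3; short weight = 0.5 / 0.25 = 2
# score = (1 + 1 - 1) * 2/3 + 1 * 2 = 8/3
```

So here the single short voter's upvote counts three times as much as each tall voter's, which is exactly the "half the weighted votes for half the population" effect described above.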
Frankly, most counter-positions seem to boil down to "representative voting control by minority groups would lead to a worse site", and I don't understand why that would be the case. If it led to increased growth of EA among minority groups, that seems a good thing.
So my slightly clunky analogy aside, what do you think?
Do we acknowledge our activities will change as we grow? Are we transparent about our mission?