I like this because it is a low-overhead way for high-impact people to organise retreats, holidays, etc. with aligned people, and this is plausibly very valuable for some people. It will also nudge people to look after themselves and spend time in nice places, which on the current margin is maybe a good thing, idk.
Fwiw I think that LTFF would fund all of the ‘example use cases for guests’ anyway for someone reasonably high-impact/value-aligned, so I think this is more about nudges than actually creating opportunities that don’t already exist.
Not all EAs work on the long-term future.
Sure, but it’s pretty reasonable to think that Kat expects the majority of the value to come from helping longtermists, given that that is literally the reason she set up Nonlinear.
Also, EAIF will fund these things.
I think that we should just bite the bullet here and recognise that the vast majority of smart, dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long-run future.
I imagine that if you listed some people who are working on NT things, I would think they mostly:
Haven’t been around EA very long, so haven’t been exposed to LT ideas properly
Don’t think about things well
Are very averse to taking weird ideas seriously
Are actually working on LT things, but it doesn’t look like it for optics reasons
To be clear, there are plenty of people working on LT issues who have some or all of the above problems, and I am also not very excited about them or their work.
I would be super interested in seeing your list, though; I’m sure there are some exceptions.
My previous comment may well ‘deserve’ to be downvoted, but given that it has been heavily downvoted, I would appreciate it a lot if some of the downvoters could explain why they downvoted it.
“Don’t think about things well” is probably what caused it. It makes it hard not to read your post as saying NT EAs are stupid. If you removed that, your comment would have basically the same meaning, except the explanation would be lack of exposure (or an aversion to taking weird ideas seriously) rather than something that feels like a proxy for stupidity.
Disclaimer: I didn’t downvote you.
Full disclosure: I’m thinking about writing up the ways in which EA’s focus on impact, and the amount of deference to high-status people, create cultural dynamics which are very negative for some of its members.
I think that we should just bite the bullet here and recognise that the vast majority of smart, dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long-run future.
It’s a divisive claim, and not backed up with anything. By saying ‘bite the bullet’, it’s as if you’re taunting the reader: “if you don’t recognise this, you’re willfully avoiding the truth, or being cowardly in the face of it”. For such a claim, I think the onus is on you to back it up.
It’s also quite a harsh value judgement of others, and bad for that reason (see below).
To be clear, there are plenty of people working on LT issues who have some or all of the above problems, and I am also not very excited about them or their work.
This implies “some people matter, others do not”. It’s unpleasant and a value judgement, and worth downvoting on that alone. It also assumes such judgements can easily be made of others, e.g. whether they “don’t think about things well”. I think I have pretty good judgement of people and how they think (it’s part of my job to have it), but I wouldn’t make these claims about someone as if they were definitive and then decide whether to engage or disengage with them on that basis.
But it’s even more worth downvoting given how many EAs (in my experience, I’ll caveat) end up disconnecting from the community or beating themselves up because they feel the community makes value judgements about them, their worth, and whether they’re worth talking to. I think it’s bad for all the ‘mental health -> productivity -> impact’ reasons, but most importantly because I think it matters not to hurt others or create conditions in which they would be hurt. The statement you made seems to me to be very value-judgement-heavy, and would make many people feel threatened and less willing to express their thoughts in case they were accused of ‘not thinking well’, so I certainly don’t want it going unchallenged; hence the downvote.
I would be super interested in seeing your list, though; I’m sure there are some exceptions.
I think making a list of people doing things, ranking them against your four criteria above, and sharing that with other people would bring a further negative tone to the EA community.
You directly said that all the smart, dedicated people are working on LT causes or interventions, and also distinctly said that NT people aren’t very capable or haven’t thought things through, and that’s why they are NT.
Moving to explicit slagging of different cause areas isn’t orthodox or acceptable in EA, IMO, even among “pure” LT or NT people talking about each other’s areas.
Things get metal in a world without these norms, which is why EA leaders pay a lot of attention to maintaining them. Very strong people who don’t perfectly agree with every possible aspect of LT implementations are still very happy to see strong AI safety work and to see it grow greatly. Some NT people are sitting on pretty powerful, unvoiced, novel criticisms of LT (and vice versa). These ideas can’t really be socialized and won’t be said.
Note that this does not involve horse trading or anything like that.
I would expect to provide aid or resources to other EAs working on cause areas I am not interested in, and vice versa. Not doing so is defection.
I guess another aspect is that the “large” resources that will need to be deployed to LT are important: 7-figure comp for individual talent, small armies of PAs, comms and operations staff, reams of impressive-looking applied-math output, outreach, public figures, and other work. All of the above amounts to the expenses of a medium-size startup, by the way.
Yet there is anxiety among NT people, who, say, give up high six-figure salaries and staff and can’t quite compete in a material sense, that the situation in the above paragraph will just run away, with the resulting “complex” leading to certain excesses, like insular, uncollegial and self-interested views that consume other areas.
All of the above is sort of known and has been thought about for a long time, and the proximate issues are manageable, even stale.
So, anyway, sort of sitting in “the outer loop” and looking at the situation in the first two paragraphs, a reasonable view is to be concerned about the related anxiety and pressure manifesting in an unhealthy way, coloring and providing an unstable tailwind to topics or memes like “vultures”, “funding”, “trust”, etc. These are genuinely important and valuable topics that need examination. But these anxieties and tailwinds risk poisoning and spoiling this thinking. They also provide a powerful soapbox for posturing and other malign behaviour.
In total, this might ultimately foul up important transitions, with bad effects on everyone.
So like, even if everything in your comment were true, comments that just go straight to “NT are bad” are not helpful.
I didn’t downvote. The negative component of my impression is that it seems vaguely rude to my model of neartermists, and the first paragraph doesn’t quite seem true even if the most impact is produced by longtermists, because people could have bad epistemics or different morals and still be smart and dedicated. Also, it seems out of scope for this discussion.
Also, even if most of the impact is produced by longtermists, people working in global health or animal welfare are often like 60% of the way there epistemically compared to the average person, and from a longtermist perspective the reason why they aren’t producing much impact is just that reality is unfair and effectiveness is a conjunction of multipliers.
I think some common reasons not to focus primarily on improving the long-run future, such as person-affecting views, non-totalist utilitarian beliefs, and the cluelessness objection, don’t fit into the four categories you described.