I think you’re imagining that the longtermists split off and then EA is basically as it is now, but without longtermism. But I don’t think that’s what would happen. If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infrastructure and movement building.
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does—it’s not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
I agree that longtermism’s association with EA has some costs for neartermist goals, but it’s really not clear to me that the association is net negative for neartermism overall. Perhaps we’ll find out.
(I personally like the core EA ideas, and I have learned a lot from engaging with non-longtermist EA over the last decade, and I feel great fondness towards some neartermist work, and so from a personal perspective I like the way things felt a year ago better than a future where more of my peers are just motivated by “holy shit, x-risk” or similar. But obviously we should make these decisions to maximize impact rather than to maximize how much we enjoy our social scenes.)
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does—it’s not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
This paragraph feels pretty over the top. When you say “resources” I assume you mean that neartermist EAs wouldn’t have enough money to maintain the Forum, host EAGs, run EA Funds, etc. This doesn’t feel that accurate, partly because I don’t think those infrastructure examples are particularly resource- or labour-intensive, and partly because sufficient money seems to be available to make them happen:
Forum: It seems like 1-2 people are working FTE on maintaining the Forum. That doesn’t seem like much at all, and to be frank, I’m sure volunteers could manage it just fine if necessary (assuming access to the underlying codebase).
EA Funds: Again, 1-2 FTE working on this, so I think it’s hardly a significant resource drain, especially since 2.5 of the funds are neartermist.
EAGs: Yes, definitely more expensive than the above two bits of infrastructure, but I know at least one neartermist org is planning a conference (TBA), so I don’t think the number of conferences will fall to zero. More likely it’ll be lower than it is right now, but one could also reasonably think we currently have more than is optimally cost-effective.
Overall it seems like you either (1) think neartermist EA has access to very few resources relative to longtermist EA, or (2) think that longtermist EA doesn’t have as much direct work to spend money on, so by default it spends a higher % of total funds on movement infrastructure?
For (1): I would be curious to hear more about this, as it seems like, without FTX, the disparities in neartermist and longtermist funding aren’t huge (e.g. I think no more than 10x different?). Given that OP / Dustin are the largest funders, and the longtermist portfolio is likely going to be around 50% of OP’s total portfolio, this makes me think differences won’t be that large without new longtermist-focused billionaires.
For (2): I think this is largely true, but again I would be surprised if this led to longtermist EA being willing to spend 50x more than neartermist EA (I could imagine a 10x difference). That said, a few million for neartermist EA, which I think is plausible, would cover a lot of core infrastructure.
The main bottleneck I’m thinking of is energetic people with good judgement to execute on and manage these projects.

How come you think that? Maybe I’m biased from spending lots of time with Charity Entrepreneurship folks, but I feel like I know a bunch of talented and entrepreneurial people who could run projects like the ones mentioned above. If anything, I would say neartermist EA has a better (or at least longer) track record of incubating new projects than longtermist EA!
I also think that the value of a nice forum, EAGs, and the EA Funds is lower for non-longtermists (or equivalently the opportunity cost is higher).
E.g. if there was no Forum, and the CE folks had extra money and talent, I don’t think they would make one. (Or EAGs, or possibly ACE EA funds.) Also, the EA Fund for global health and development is already pretty much just the GiveWell All Grants Fund.
Also worth noting that “all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — were all major themes in the movement since its relatively early days, including at the very first “EA Summit” in 2013 (see here), and IIRC for at least a few years before then.” (Comment by lukeprog)
I think a split proposal is more realistic on a multi-year timeframe. Stand up a meta organization for neartermism now, and start moving functions over as it is ready. (Contra the original poster, I would conceptualize this as neartermism splitting off; I think it would be better to fund and grow new neartermist meta orgs rather than cripple the existing ones with a longtermist exodus. I also think it may be better off without the brand anyway.)
Neartermism has developed meta organizations from scratch before, of course. From all the posts about how selective EA hiring practices are, I don’t sense that there is insufficient room to staff new organizations. More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn’t currently be interested in affiliating with EA as a whole. So you’d get some experienced hands and a good number of new recruits . . . which is quite a bit more than neartermism had when it created most of the current meta.
In the end, I think neartermism and longtermism need fundamentally different things. Trying to optimize the same movement for both sets of needs doesn’t work very well. I don’t think the need to stand up a second set of meta organizations is a sufficient reason to maintain the awkward marriage long-term.
Stand up a meta organization for neartermism now, and start moving functions over as it is ready.
As I’ve said before, I agree with you that this looks like a pretty good idea from a neartermist perspective.
Neartermism has developed meta organizations from scratch before, of course.
[...]
which is quite a bit more than neartermism had when it created most of the current meta.
I don’t think it’s fair to describe the current meta orgs as being created by neartermists and therefore argue that new orgs could be created by neartermists. These were created by people who were compelled by the fundamental arguments for EA (e.g. the importance of cause prioritization, cosmopolitanism, etc). New meta orgs would have to be created by people who are compelled by these arguments but also not compelled by the current arguments for longtermism, which is empirically a small fraction of the most energetic/ambitious/competent people who are compelled by arguments for the other core EA ideas.
More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn’t currently be interested in affiliating with EA as a whole. So you’d get some experienced hands and a good number of new recruits
I think this is the strongest argument for why neartermism wouldn’t be substantially weaker without longtermists subsidizing its infrastructure.
Two general points:
There are many neartermists who I deeply respect; for example, I feel deep gratitude to Lewis Bollard from the Open Phil farmed animal welfare team and many other farmed animal welfare people. Also, I think GiveWell seems like a competent org that I expect to keep running competently.
It makes me feel sad to imagine neartermists not wanting to associate with longtermists. I personally feel like I am fundamentally an EA, but I’m only contingently a longtermist. If I didn’t believe I could influence the long run future, I’d probably be working on animal welfare; if I didn’t believe that there were good opportunities there, I’d be working hard to improve the welfare of current humans. If I believed it was the best thing to do, I would totally be living frugally and working hard to EtG for global poverty charities. I think of neartermist EAs as being fellow travelers and kindred spirits, with much more in common with me than almost all other humans.
While neartermists may be a “small fraction” of the pie of “most energetic/ambitious/competent people,” that pie is a lot larger than it was in the 2000s. And while funding is not a replacement for good people, it is (to a point) a force multiplier for the good people you have. The funding situation would be much better than it was in the 2000s. In any event, I am inclined to think that many neartermists would accept B-list infrastructure if that meant that the infrastructure would put neartermism first -- so I don’t think the infrastructure would have to be as good.
I’m just not sure if there is another way to address some of the challenges the original poster alludes to. For the current meta organizations to start promoting neartermism when they believe it is significantly less effective would be unhealthy from an epistemic standpoint. Taking the steps necessary to help neartermism unlock the potential in currently unavailable talent/donor pools I mentioned above would—based on many of the comments on this forum—impair both longtermism’s epistemics and effectiveness. On the other hand, sending the message that neartermist work is second-class work is not going to help with the recruitment or retention of neartermists. It’s not clear to me what neartermism’s growth (or maintenance) pathway is under current circumstances. I think the crux may be that I put a lot of stock in potentially unlocking those pools as a means of creating counterfactual value.
I understand that a split would be sad, although I would view it more as a sign of deep respect in a way—as an honoring of longtermist epistemics and effectiveness by refusing to ask longtermists to compromise them to help neartermism grow. (Yes, some of the reason for the split may have to do with different needs in terms of willingness to accept scandal risk, but that doesn’t mean anyone thinks most longtermists are scandalous.)
I broadly agree.
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does—it’s not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
My guess would be that the people who want an EA-without-longtermism movement would bite that bullet.
The kind of EA-without-longtermism movement that is being imagined here would probably need fewer of those things? For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to the top recommended charity by GiveWell, and more instrumentally useful when you want to figure out what AI safety research agenda to follow.
For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to the top recommended charity by GiveWell, and more instrumentally useful when you want to figure out what AI safety research agenda to follow.
Like, do you really think this characterization of non-longtermist activities suggests to proponents of the OP that your views are informed?
(In a deeper sense, this reflects knowledge necessary for basic cause prioritization altogether.)
Donating 10% of your income to GiveWell was just an example (those people exist, though, and I think they do good things!), and this example was not meant to characterize non-longtermists.
To give another example, my guess would be that for non-longtermist proponents of Shrimp Welfare, EAG is instrumentally more useful.
To be clear, I might not be what I seem in more than one way. I also don’t want to dig into this too far, but I’m not sure some people appreciate the level of “neartermist” sentiment aligned with the OP.
Well, no, your comment isn’t true.
If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infrastructure and movement building.
I think we need to zoom out here, as you might be directly making the case for the OP?
Zooming out, there’s a 5-10 page essay here about the feelings of neartermists, who over the last 2+ years have not been super thrilled about seeing talent change over to LT causes, and potential donors intercepted.
But just over the past few months, the neartermists, who have never heard of or would never associate with Michael Vassar, who have no interest in discussing HBD, and who have no claim to FTX money, have just been seared. (Also the “castle”, Vice articles about Neo-Nazis, etc., but I can’t figure out how to work them in here; this is too long.) This is made worse by the quality and level of discussion, such as giant, unnecessary threads on the concept of weirdness and personal lifestyles. Neartermists have no interest in judging or litigating most of these issues. They “can’t count that low”.
If the neartermists believe they are exposed, while you’re explaining that the LT crowd is sort of evading this by being well-resourced, non-EA-labeled orgs, what’s being said here?
I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months
Increasingly? Which actual well-esteemed new LT org is explicitly EA-labelled? I can’t easily think of any. There is literally nothing besides CEA in the grants?

https://www.openphilanthropy.org/grants/?q=&focus-area%5B%5D=longtermism
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does
But this is exactly what the OP is saying and wants to solve. Unless the strategy here is to implicitly hold neartermists “hostage”, well, yes, exactly: the neartermists want similar resources to get these things, and to escape the situation you sort of laid out succinctly.
But obviously we should make these decisions to maximize impact
Look, your Redwood Research and Paul’s ARC are two strong object-level safety orgs[1]. How many other object-level orgs are there doing good work in AI safety?
Apart from you, orgs like LessWrong and MIRI are marginal, even in their own worldview. What has MIRI’s output been for the last 3 years? These tightly knit groups can’t stop some of their own community moving into two lush, for-profit organizations that are associated with EA. One or more of these orgs are involved in, if not literally spawned, a new LLM arms race.
Who is John Carmack? What did he decide to do after reading Superintelligence?
For a host of reasons, we both know the T in ITN could be a big topic for debate.
to maximize how much we enjoy our social scenes.)
Is the implication that people are choosing neartermist areas based on social circles?
But isn’t the truth sort of exactly the opposite? I think if we studied the social graph and lifestyle choices, we would find that most neartermists have looser affiliations with EA.
BTW, I don’t find explaining AI safety hard in my circle (and also most of the neartermist counterarguments are wrong and I can usually set them aside). There’s also a ton of shiny objects that AI safety stuff is near or touches on (LLMs, Stable Diffusion, etc.) that seem impressive to “normies”.

IMO the biosecurity orgs could fit in this purported new movement naturally.
Think about why this comment is downvoted? We can’t even see it.

There should be a plus-sign next to it to expand the comment.
Only one person downvoted it (the OP gets an upvote by default, and hovering over the −5 shows only 2 votes, so a high-karma user probably downvoted it).
Why create an account just to post this? You could have used a pre-existing account to post a comment to this effect, or, if you are the OP, just edit your post to ask for more elaboration.

Mainly for point 3, I’ve downvoted your post.