My evidence is just hearsay, anecdotes, and people I’ve talked to.
Once again, I don’t think we should be alarmist about this.
Well, you said pretty clearly that you read this on the internet:
In many internet circles, there’s been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I’m not even an EA, but I can pretend, as getting a 10k grant is a good instrumental goal towards [insert-poor-life-goals-here].” Or, “Did you hear that a 16-year-old got x amount of money? That’s ridiculous! I thought EAs were supposed to be effective!” Or, “All you have to do is mouth the words ‘community building’ and you get thrown bags of money.”
Your “boulder splash” is annoying. It pushes on the very issue and the adverse effects you claim to worry about. Noise and heat are self-perpetuating. This transition into a higher-funding environment is delicate, and outcomes depend on initial conditions and the liminal states. Reactions and confidence from the community and nascent leaders are important and benefit from a firm hand and the right tone.
I don’t think this is malice, but it’s clumsy and emotionally manipulative.
Yeah, I agree. Pointing to this problem can make the problem worse. It’s a little bit of an info-hazard in that respect? But yeah, I’ll agree it was slightly clumsy. I wanted to tell everyone that this was a thing that was happening, without creating a backlash that would destroy the genuinely valuable parts of what we’re doing. Furthermore, it is genuinely really valuable to have such a high-trust community, and I don’t want that to change. I guess whether or not I succeeded in walking this tightrope is for others to decide.
One of the devices not mentioned in my comment below is the utility of LessWrong as a filter/proxy for values. This can work, but it has a weakness: institutional literacy and intellectual honesty aren’t part of the site’s aesthetic. You’ve demonstrated that with your post and your comment, which is poetic.
I know someone who has been trying to work on the problem (which your post doesn’t elaborate on very well) along roughly four arms:
Show, not tell, instances of the issue you worry about to increase literacy about it (without getting defunded)
Show, not tell, issues with intellectual honesty and subcultures (without getting defunded)
Set up institutions and mechanisms to solve the issue
Increase literacy about the design of EA and how funding is decided
It turns out this project is pretty hard.
The money thing itself isn’t that hard; “meta-AI” stuff in business is everywhere. What’s tricky is showing rather than telling, and handling the cause-area activism/proxy dynamics and the issues that follow from them. If you’re trying to stand in multiple cause areas, which is necessary, the current situation is absurd and unfair to work in.