Mati_Roy
I agree with what Kit said as well.
But the fact that the only reason you’re not removing it is Kit’s comment makes me pretty concerned about the forum.
I also disagree that private communication is better than public communication in cases like this.
Alignment Newsletter Podcast: http://alignment-newsletter.libsyn.com/
I also feel similarly. Thanks for writing this.
Points I would add:
-This organisation could focus on supporting local LessWrong groups (which CFAR isn’t doing).
-This organisation could focus on biases that make people shift in a better direction rather than going in the same direction faster. For example, reducing the scope insensitivity bias seems like a robust way to make people more altruistic, whereas improving people’s ability to make Trigger-Action-Plans might simply accelerate the economy as a whole (which could be bad if you think that crunches are more likely than bangs and shrieks, as per Bostrom’s terminology).
-The organisation might want to focus on theories with more evidence (i.e., be less experimental than CFAR), both to avoid spreading false memes that could be difficult to correct and to be careful about idea inoculation.
(Why) do you focus on near-term animal welfare and poverty alleviation?
Community norm proposal: I wish all EA papers were posted on the EA Forum so I could see what other EAs think of them, which would help me decide whether I want to read them.
Short-termism is to longtermism what longtermism is to infinitarianism.
I’m not sure where to write this, so I’ll write here for now.
It seems like Google doesn’t properly index articles posted on this forum, which seems problematic. It makes it harder for me to retrieve articles I’ve read, for example, and it also makes it harder to discover new posts, although that problem is more invisible.
Any query I do with “site:forum.effectivealtruism.org” never links to articles directly, but only to other pages like user pages.
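One way to check one possible cause (this is just a guess at a diagnostic, not a confirmed explanation) is to see whether the forum’s robots.txt blocks crawlers from post pages; a minimal Python sketch, with a hypothetical post URL:

```python
# Minimal sketch: check whether Googlebot is allowed to fetch a post URL
# according to the site's robots.txt. This only tests one possible cause
# (crawler exclusion); the post URL below is a made-up example.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://forum.effectivealtruism.org/robots.txt")
parser.read()  # fetch and parse robots.txt

post_url = "https://forum.effectivealtruism.org/posts/example-id/example-slug"
print(parser.can_fetch("Googlebot", post_url))  # False would suggest posts are blocked
```

If posts aren’t blocked, the problem more likely lies in how the pages are rendered or in the sitemap.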
I just skimmed. I like the idea. That plausibly only works for sufficiently memetically fit charities (which varies based on the product/service), but that’s probably still a significant number.
If this thesis is true, then donors (to charities for which there exist companies with a consumer base that is memetically fit for the charity) should buy those companies and transform them into Profit for Good companies (at least assuming they’re able to hire a good CEO, and maybe providing other incentive-based pay for the CEO if they don’t hold shares), because this would increase the valuation of the company, and so increase their donation impact.
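A toy version of that math, with numbers that are entirely made up for illustration:

```python
# Toy illustration, made-up numbers: if converting a company to Profit for
# Good raises its valuation (because its consumer base prefers the
# charity-owned brand), buying and converting beats donating the cash.
purchase_price = 10_000_000   # hypothetical cost of buying the company
conversion_uplift = 0.20      # assumed valuation gain from the conversion

value_as_profit_for_good = purchase_price * (1 + conversion_uplift)

donate_cash_directly = purchase_price             # baseline: just give the money away
donate_via_conversion = value_as_profit_for_good  # the whole company now benefits the charity

print(donate_via_conversion - donate_cash_directly)  # 2,000,000.0 of extra impact
```

The point is only that the uplift, if it’s real, accrues to the charity on top of the purchase price.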
Running the organization directly as a charity would also have other advantages, notably tax benefits and exemption from the H-1B visa cap.
I’ll share this on The Economics of Doing Good (Effective Altruists) and add it to the Cause Prioritization Wiki.
I think it would be good to differentiate between things that are instrumental to doing EA and things that are directly doing EA.
Ex.: Attending events and reading books is instrumental. Working and donating money is directly EA.
I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle both in our discussions, we might end up with people attending a bunch of events and reading a lot online, but without ever producing value (for example).
Although maybe it does produce value in itself, because those people can do movement building themselves and become better voters, for example. And focusing a lot on engagement might turn EA into a robust superorganism-like entity. If that’s the argument, then that’s fine, I guess.
Somewhat related: The community’s conception of value drifting is sometimes too narrow.
Every once in a while, I see someone write something like “X is neglected in the EA Community”. I dislike that. The part about “in the EA Community” seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.
I created a wiki page, since a wiki makes it easier to maintain an up-to-date repository like this.
I took all the links on this page, removed broken links and added some new links.
Here it is: https://causeprioritization.org/Donating_now_vs_later
Part-time remote assistant position
My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring people to me, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).
This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheets formatting. A full description of our services is here: https://bit.ly/PantaskServices
The form to apply is here: https://airtable.com/shrdBJAP1M6K3R8IG It pays 20 USD/h.
You can ask questions here, in PM, or at mati@pantask.com.
On handling posts that may violate Forum rules:
Thanks for the clarifications.
On private vs. public communication:
I don’t want to argue about what to do in general, but here in particular my “accusation” consists of doing the math. If I got it wrong, I’m sure others got it wrong too, and it would be useful to clarify publicly.
On that note, I’ve sent this post along to Lucius of the GivingMultiplier team.
Thank you.
I wonder about the risks of optimising for persuasive arguments over accurate arguments. It feels like a negative-sum game that will leave most people with a worse model of the world, and we should have a strong norm against it. Some people have done this for arguments for donating, so maybe you want to update a bit against donating to balance this out: https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html
On the other hand, I sometimes want to pay people to change my mind, to incentivize finding evidence. A good example is paying for arguments that lead someone to cancel their cryonics membership, hence saving them money: https://www.lesswrong.com/posts/HxGRCquTQPSJE2k9g/i-will-pay-usd500-to-anyone-who-can-convince-me-to-cancel-my Although if I did that, I would likely also have a bounty for arguments in favor of spending resources on life-extension interventions.
So maybe 2 crucial differences are:
a) whether the recipient of the argument is also the one paying for it, or is otherwise consenting to / aware of what’s going on
b) whether there’s a bounty on both sides
This gave me the idea of The Bullshit Awards.
Archive.org doesn’t seem to be archiving articles from the new forum properly; having them properly archived would be really valuable in my opinion. When I consult a post on Archive.org, it briefly shows up, then the post disappears, and all I see is “Sorry, we couldn’t find what you were looking for.” For example: https://web.archive.org/web/20181113212118/https://forum.effectivealtruism.org/posts/4r3ZpiEoWft62yPwv/crohn-s-disease.
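The symptom (the post flashing and then disappearing) suggests the content is loaded client-side, so a capture can exist without rendering. A minimal Python sketch of how one might check whether a snapshot exists at all, using the Wayback Machine’s public availability API:

```python
# Minimal sketch: ask the Wayback Machine's availability API whether any
# snapshot of a post exists. Note this only confirms a capture was made;
# it can't tell whether the captured page actually renders the post,
# which is the symptom described above.
import json
from urllib.parse import quote
from urllib.request import urlopen

post_url = "https://forum.effectivealtruism.org/posts/4r3ZpiEoWft62yPwv/crohn-s-disease"
api = "https://archive.org/wayback/available?url=" + quote(post_url, safe="")

with urlopen(api) as resp:
    data = json.load(resp)

print(data.get("archived_snapshots"))  # an empty dict means no snapshot is indexed
```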
The 3 images are now broken
The Global Challenges Foundation ( https://globalchallenges.org/ ) has a lot of information that is really relevant to EAs. They care about improving cooperation among countries to reduce global catastrophic risks. I strongly recommend adding them to these updates. They have a newsletter.
Hi all, Haydn and I figured this post was a good place to plug our startup, Pantask. While the services we provide are not as advanced as those listed here, Pantask can offer assistance to EA orgs that need help with day-to-day operations but can’t afford to hire full-time employees. We provide general virtual assistance services, such as organizing chaotic troves of data, managing schedules and emails, and helping with brain debugging. We also offer graphic design, copyediting, transcription, and writing services. Our assistants can also perform certain kinds of research (the kind you can do in <8 hours, generally speaking), such as finding service providers, information on grants, etc.
Essentially, if the task can be done by a reasonably competent person without a specialized skill set, our assistants can very likely do it for you. In addition to being EA-owned, some of our assistants are also EAs, and even more are familiar with and interested in EA. We’ve served EA charities before. We charge 30 USD per hour. If you’re not used to delegating tasks, we can help you review the tasks you delegate to make sure they are clear, at no additional cost.
You can send tasks to ask@pantask.com, or email either of us at mati@pantask.com or haydn@pantask.com, or call us at (570) 509-3366. You can also schedule me on: https://calendar.google.com/calendar/u/0/appointments/schedules/AcZssZ0Dc0qvV3EbGsGR39_dhoeusVtX6rwnpfXpGVHwRHPGPuIjTd1GPiCRz9qMwTkIZKCPPVB0AQQm
I vote for having one remote EAG every year. This is great as far as I’m concerned!