Upvoted because -50 karma strikes me as excessive for a joke (even if in poor taste)
[Question] What do you view as core EA principles?
Presentations from any of the individuals who work on evaluation, getting "into the weeds" of how decisions are made, and recent work
Presentations from GiveWell grantees on what they're currently working on
Bill / Melinda Gates, or otherwise someone from the Gates Foundation
Elon Musk, or people from Tesla, Neuralink, and SpaceX
People from pharmaceutical companies
Board members of EVF
Sal Khan
A talk from successful edutainment/social media people who discuss EA-adjacent ideas, like CGP Grey, Kurzgesagt, etc. (who did not necessarily start out EA-funded)
Podcast interviewers who discuss EA-relevant content, e.g. Ezra Klein (as already mentioned), Lex Fridman, Joe Rogan.
People running non-cause-area EA interest groups, e.g. SEADS, High Impact [Engineers, Law, Professionals, Medicine, etc.], religious EA groups, on what they're working on/how EA is different in their communities
I suspect you would get a much wider applicant pool for EAGxSingapore if it were a week later.
The time requirements (<10 hours/week for most roles for most of the process, then full-time the week of the conference) are not really viable for most working professionals, and are more suited to students who would be on winter break. But it looks like NUS (Singapore), Ateneo de Manila University and De La Salle University (Philippines), and Fulbright University (Vietnam), i.e. (I think) the majority of the EA university groups in South East Asia, have school terms running up to the week of the conference.
:)
If we are correct about the risk of AI, history will look kindly upon us (assuming we survive).
Perhaps not. It could be more like Y2K, where some believe problems were averted only by a great deal of effort and others believe there would have been minimal problems anyway.
I sometimes downvote comments and posts mostly because I think they have "too much" karma: comments and posts I might upvote or not vote on if they had less karma. As I look at the comment now it has 2 karma with 11 votes; maybe at some point it had more and people voted it back to 2?
I would have downvoted this comment if it had more karma, because I think Deborah's comment can be read as antagonistic: "utterly blind", "dire state", "for heaven's sake!", calling people ignorant. In this context I didn't read it this way, but I often vote based on "what would the forum be like if all comments were more like this" rather than "what intentions do I think this person has".
Hi Deborah, I also disagree with this comment (and have disagree-voted but not downvoted it). Here are some of my reasons:
Without getting too much into it, I think the concerns with the population growth/technological change trend are somewhat distinct from problems relating to the current population size of the earth. One can be concerned that the population replacement rate is dropping too fast while also thinking that the current global population is too large.
I think that, while the summarised breakdown under the overpopulation project link you give can be understood as broadly true (the specific link is broken), it's imprecise, and the real picture is much more complicated. My understanding is that many estimate the carrying capacity of the earth to be 10 billion. If this estimate is true, then "large population, ecological sustainability and high human development" is possible if we define "large population" as "8-10 billion people" and the other two factors in the same way that those who made the estimates defined them. I also think this picture does not consider the micro effects of aging populations, and papers over the important fact that the welfare of people in the least developed areas is bottlenecked not by planetary boundaries but by the distribution of resources. Many effective altruists (myself included) also take a longtermist view which looks to expand sentient life beyond the earth.
You present biodiversity and "balance" as ultimate goals, while I primarily think of the former as an instrumental goal and the latter as often ill-defined.
I'm concerned about the long-run effects of the people most concerned with these issues collectively choosing not to have children. See discussion here.
It's not clear that human incursion into animal habitats is net-negative for wild animal welfare. See discussion here.
I also think it is unfair to call the post "utterly blind to the dire state of the biosphere and the existential risks we are creating for our species by pushing beyond the planetary boundaries". Rather, I think these concerns are outside the scope of this post.
I think having a separate section for community posts has greatly improved my experience of the forum. However, I think there are still quite a lot of posts that stay on the front page for a long time for similar reasons to why community posts did: because they "[interest] everyone at least a little bit" and/or are "accessible to everyone, or on topics where everyone has an opinion".
I want to see posts that do things like present the results of significant work get more attention, and, to a lesser extent, for topical posts (i.e. announcements about recent news, events, and achievements) to also get more attention. I think these posts suffer from not having either of the above properties.
Could we have filters/tags that promote these posts?
Elbert Hubbard's A Message to Garcia
I imagine there could be a useful office in a city with ~20 people using it regularly and ~100 people interested enough in EA to come to some events, and I wouldn't think of that city as an "EA hub".
I also think that, e.g., a London office has much more value than, e.g., an Oxford or Cambridge office (although I understand all three to be hubs), even though Oxford and Cambridge have a higher EA density.
located in an existing hub so that program participants have plenty of people outside the program to interact with
I don't understand this consideration. It seems to me that people located in a place with a more robust existing community are the people who would counterfactually benefit the least from a place to interact with other EAs, because they already have plenty of opportunities to do so.
I'm assuming by "hub" you mean "EA hub", but if by "hub" you mean "a place with high population density/otherwise a lot of people to talk to", then this makes sense.
(Full disclosure: I was a grantee of CEEALAR last year, but I'm thinking about this in the context of non-residential office/co-working spaces like Meridian Office.)
Can you say more precisely what it means for a fund to be recommended? For instance, how should a donor compare giving to one of the "recommended funds" with giving to a specific charity or project directly (and, by extension, one of GWWC's new funds over a specific charity)?
How did you choose the set of evaluators to evaluate? For instance, why evaluate LTFF and LLF over FP's GCR fund? Were there other evaluators considered for the process but not evaluated?
It's kind of jarring to read that someone has been banned for "violating a norm": that word to me implies informal agreements within the community. Why not call them "rules"?
tabforacause, a browser extension which shows you ads and directs the ad revenue to charity, has launched a way to set GiveDirectly as the charity you want to direct ad revenue to.
It doesn't raise a lot of money per tab opened, obviously, but I'm not using my new-tab page for anything else and find the advertising unobtrusive (it's in the corner, not taking up the whole screen). If you're like me in these respects, it could be something to add.
Fixed now, thanks for flagging!
This is great! I think it's extremely important and underrated (dare I say "neglected"?) work to identify and shift resources towards more effective charities in smaller contexts, even if those charities are unlikely to be the most globally effective.
Are you able to share more of your analysis or data? I'm curious about the proportion of charities in the categories you identify above, and what numerical/categorical values, if any, you assigned.