# evelynciara

Karma: 2,746

I’m a machine learning engineer on a team at PayPal that develops algorithms for personalized donation recommendations (among other things). Before this, I studied computer science at Cornell University. I also manage the Effective Public Interest Computing Slack (join here).

Obligatory disclaimer: My content on the Forum represents my opinions alone and not those of PayPal.

I also offer copyediting and formatting services to members of the EA community for $15-35 per page, depending on the client's ability to pay. DM me for details.

I'm also interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her, ella, 她, 彼女

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

• I think some of us really need to create op-eds, videos, etc. for a mainstream audience defending longtermism. The Phil Torres pieces have spread widely (people outside the EA community have shared them in a Discord server I moderate, and Timnit Gebru has picked them up), and so far I haven't seen an adequate response.

# [Question] Should U.S. donors give to an EA-focused PAC like Guarding Against Pandemics instead of individual campaigns?

20 May 2022 23:49 UTC
8 points
6 comments
1 min read
EA link

• Good post! I'll take another look later. Nitpick: Utilitarianism has tenets, not tenants.

• I've had an idea like this before! In my concept, the user would select the criteria that they value, and the app would only show them the places that are Pareto optimal with respect to those criteria.

• Are shortforms supposed to show up on the front page? I published a shortform on Sunday and noticed that it did not appear in the recent activity feed, even though older material did. Also, does anyone else think that the shortform section should be more prominent? It's a nice way to encourage people to publish ideas even if they're not confident in them, but my most recent one has gotten little to no engagement.
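The Pareto-optimality filter mentioned in one of the comments above could be sketched roughly like this (a minimal illustration; the criteria names and data are hypothetical, and higher scores are assumed better):

```python
def dominates(a, b, criteria):
    """True if option `a` Pareto-dominates option `b`: at least as good on
    every criterion and strictly better on at least one (higher is better)."""
    return (all(a[c] >= b[c] for c in criteria)
            and any(a[c] > b[c] for c in criteria))

def pareto_front(options, criteria):
    """Keep only the options not dominated by any other option."""
    return [o for o in options
            if not any(dominates(other, o, criteria) for other in options)]

# Hypothetical example: choosing a place by cheapness and walkability.
places = [
    {"name": "A", "cheapness": 8, "walkability": 3},
    {"name": "B", "cheapness": 5, "walkability": 9},
    {"name": "C", "cheapness": 4, "walkability": 2},  # dominated by A
]
print([p["name"] for p in pareto_front(places, ["cheapness", "walkability"])])
```

Here C is hidden because A beats it on both criteria, while A and B both survive since each wins on a different criterion.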
• Big O as a cause prioritization heuristic

When estimating the amount of good that can be done by working on a given cause, a good first approximation might be the asymptotic behavior of the amount of good done at each point in time (the trajectory change). Other important factors are the magnitude of the trajectory change (how much good is done at each point in time) and its duration (how long the trajectory change lasts).

For example, changing the rate of economic growth (population growth × GDP/capita growth) has an $O(t^2)$ trajectory change in the short run, as long as humanity doesn't expand into space. We break it down into a population growth component, which grows linearly, and a component for the change in average welfare due to GDP per capita growth.[1] GDP per capita typically grows exponentially, and welfare is a logarithmic function of GDP per capita. These two trends cancel out, resulting in a linear trend, or $O(t)$. Multiplying the linear population trend by the linear welfare trend yields a quadratic trend, $O(t^2)$.

If humanity expands into space, then economic growth becomes a quartic trend. Population growth becomes cubic, since humanity can fill an $O(t^3)$ amount of space in time $t$.[2] The average welfare due to GDP per capita growth is still linear. Multiplying the cubic and linear trends yields a quartic trend, $O(t^4)$. So increasing the probability that space colonization goes well looks more important than changing the rate of economic growth on Earth.

Surprisingly, the trajectory change caused by reducing existential risk is not an exponential trend. Existential risk reduces expected welfare at each point in time by a factor of $e^{-rt}$, where $r$ is the probability of catastrophe per unit of time. This exponential trend causes all trajectory changes to wash out as $t$ becomes very large. The amount of good done by reducing x-risk from $r_1$ to $r_2$ is $\int_0^\infty \left(e^{-r_2 t} - e^{-r_1 t}\right) u(t)\, dt$, where $u(t)$ is the welfare trajectory conditional on no existential catastrophe. The factor $e^{-r_2 t} - e^{-r_1 t}$ increases to a maximum value and then decays to 0 as $t$ approaches infinity, so its asymptotic behavior is $O(1)$.
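The shape of that x-risk factor, $e^{-r_2 t} - e^{-r_1 t}$ for a hazard rate reduced from $r_1$ to $r_2$, can be checked numerically; a quick sketch with illustrative hazard rates (the specific values of $r_1$ and $r_2$ are made up):

```python
import math

# Reducing a constant hazard rate from r1 to r2 multiplies welfare at time t
# by the factor exp(-r2*t) - exp(-r1*t) relative to the unchanged trajectory.
r1, r2 = 0.02, 0.01  # illustrative catastrophe rates per unit time

def factor(t):
    return math.exp(-r2 * t) - math.exp(-r1 * t)

values = [factor(t) for t in range(0, 1001)]
peak = max(values)

# The factor starts at 0, rises to a peak, then decays back toward 0,
# so it is bounded (O(1)) rather than growing without limit.
print(round(peak, 4), round(values[-1], 6))
```

For these rates the peak is at $t^* = \ln(r_1/r_2)/(r_1 - r_2) \approx 69$, where the factor is about 0.25, and it has decayed to nearly zero by $t = 1000$.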
So the trajectory change caused by reducing x-risk is $O(u(t))$, or whatever the asymptotic behavior of $u(t)$ is. On the other hand, the amount of good done by changing the trajectory from $u_1(t)$ to $u_2(t)$ is $\int_0^\infty e^{-rt} \left(u_2(t) - u_1(t)\right) dt$. So if you can change the trajectory to a $u_2(t)$ that will grow asymptotically faster than $u_1(t)$, e.g. by colonizing space, then this may be more important than reducing x-risk while holding the trajectory constant.

1. ^ Change in GDP per capita may underestimate the amount of welfare created by technological progress. I'm not sure if this makes the change in average welfare a super-linear growth trend, though.

2. ^

# [Question] Are there good introductory materials that explain how charity evaluation works?

6 May 2022 3:03 UTC
25 points
6 comments
1 min read
EA link

• I'm willing to do this work for $15-35 per page (depending on the author's ability to pay); I'm very detail-oriented and like this kind of stuff. I can only really copyedit/proofread posts written in American English, but I can do formatting for any text. I could probably do one or two copyediting or formatting jobs per weekend.

• Another type of service that would be useful is accessibility services, such as writing transcripts/timed text for audio and video (e.g. podcasts) and alt text for images.

• That is, for those who want to use their real names on the forum.

It’s also sometimes hard for me to parse other people’s usernames into first-name and last-name components when they’re all lowercase. I’ve gotten it wrong a couple of times.

# [Question] What factors make EA outreach at a school valuable?

26 Apr 2022 6:07 UTC
10 points
• I think the main reason that nuclear power is so expensive is that it is subject to extremely strict safety standards that go beyond the regulations imposed on other energy sources, including fossil fuels. Many of these regulations are actually unnecessary to keep people safe; governments should repeal them and reform their regulatory frameworks to achieve the desired tradeoff between cost and safety. There are also advanced nuclear technologies that seem to reduce costs while improving safety, such as certain small modular reactor designs. Maybe with these regulatory and technological advances, nuclear will look more affordable.

• I ran the numbers based on the Clearer Thinking exercise and found a rate at which I’d trade money for time that I was comfortable with (in economics, this is called the marginal rate of substitution). It’s based on the amount of money you earn for each hour you work, after taxes, and for me it’s about US $50/hr. However, the rule of thumb is only theoretically grounded if you are paid by the hour. I’m salaried, so I am not expected to work a particular number of hours per week, and I can’t earn $50 more for each additional hour I work. I’m still willing to apply this number in my daily life because it “feels right”.
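The back-of-the-envelope arithmetic behind that rate can be sketched as follows (the salary, tax rate, and hours here are hypothetical, not the figures from the exercise):

```python
# Rule-of-thumb value of an hour: after-tax annual pay divided by hours worked.
# All numbers below are hypothetical, for illustration only.
annual_salary = 130_000      # USD, pre-tax
effective_tax_rate = 0.23    # combined effective tax rate
hours_per_year = 50 * 40     # ~50 working weeks of 40 hours

after_tax = annual_salary * (1 - effective_tax_rate)
value_per_hour = after_tax / hours_per_year
print(f"${value_per_hour:.2f}/hr")
```

As the comment notes, this is only a rough guide for salaried workers, since they can't actually trade one more hour of work for one more hour of pay at this rate.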

• Excellent question!

Expected value is the general notion of the average-case outcome of a random variable. In probability theory, the expected value of a random variable, $X$, is the average of its possible values weighted by their probabilities.

Expected utility is the expected value of a utility function. Since “value” is a synonym for utility, I often use “expected value” and “expected utility” interchangeably, especially when the ethics/​EA context is understood.
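For a discrete random variable, both definitions reduce to a probability-weighted sum; a minimal sketch (the gamble and the choice of log utility are made-up examples):

```python
import math

def expected_value(outcomes):
    """Probability-weighted average of a discrete random variable.
    `outcomes` is a list of (value, probability) pairs."""
    return sum(value * prob for value, prob in outcomes)

def expected_utility(outcomes, utility):
    """Same operation, but applied to the utility of each outcome."""
    return sum(utility(value) * prob for value, prob in outcomes)

# Hypothetical gamble: win 100 with probability 0.1, else win 0.
gamble = [(100, 0.1), (0, 0.9)]
print(expected_value(gamble))  # 10.0

# With a concave (e.g. logarithmic) utility, the expected utility of the
# gamble is lower than the utility of its expected value: risk aversion.
u = lambda x: math.log(x + 1)
print(expected_utility(gamble, u) < u(expected_value(gamble)))  # True
```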

# [Question] What moral philosophies besides utilitarianism are compatible with effective altruism?

16 Apr 2022 19:53 UTC
18 points