pretraining data safety; responsible AI/ML
yz
Appreciate the post. https://www.pewresearch.org/social-trends/2020/01/09/trends-in-income-and-wealth-inequality/ This in-depth research article suggests the rich are getting richer faster, and notes that "Economic inequality, whether measured through the gaps in income or wealth between richer and poorer households, continues to widen." This matches your intuition.
I wonder what could be done to really incentivize powerful/high-income people to care about contributing more.
Thanks for the thoughtful and organized feedback; I have to say I share very similar views after the intro to EA course. It seemed to me back then that EA has a lot of subgroups/views. Appreciate the write-up, which probably speaks for many more people!
With a long timeline and less than 10% probability: Hot take is that these are co-dependent; prioritizing only extinction is not feasible. Additionally, does a scenario where only one human exists while all others die count as non-extinction? What about one where only a group of humans survives? How should this group be selected? It could quickly and dangerously fall back to fascism. It would likely benefit only the group of people with currently low to no suffering risks, which unfortunately correlates with the wealthiest group. When we are "dimension-reducing" the human race to one single point, we ignore the individuals. To me, this goes against the intuition of altruism.
I fundamentally disagree with the winner-take-all type of cause prioritization. Instead, allocate resources to each area; unfortunately, there may be multiple battles to fight.
To analyze people's responses, I can see this question being adjusted to account for prior assumptions: 1. How satisfied are you with how the world is currently doing? What are the biggest gaps to your ideal world? 2. What is your assessment of the timeline, plus the current probability of extinction risk, and due to what?
An example of large-scale deepfakes that is pretty messed up: https://www.pbs.org/newshour/world/in-south-korea-rise-of-explicit-deepfakes-wrecks-womens-lives-and-deepens-gender-divide
Another example off the top of my head is fake LinkedIn profiles.
Not sure how to address the question otherwise; one thought is that there might be deepfakes we cannot yet detect or identify as deepfakes.
It is so sad to see "humans creating suffering for humans" amplified right now.
It also worries me, in the context of marginal contributions, when some people (not all) start to treat "marginal" as a "sentiment" rather than actual measurement (getting to know those areas, the actual resources, the amount of spending, and what the actual needs/problems may be) as reasoning for cause prioritization and donations. A sentiment towards a cause area does not always mean the cause area got the actual attention/resources it was asking for.
Interested in the human welfare intervention program! Could I reach out by DM to ask for the name? Also totally understand if you are hesitant to provide the name. Thanks!
thanks!
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines:
Keep one and delete the rest (or write your own): I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time.
This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.
This is a Forum post that I wouldn’t have posted without the nudge of Draft Amnesty Week. Fire away! (But be nice, as usual)
I find it surprising when people (people in general, not EA-specific) do not seem to understand the moral perspective of "do no harm to other people". This is confusing to me, and I wonder what aspects/experiences contribute to some people being able to understand this and others not.
Great initiative; thanks! Would "This is a Draft Amnesty Week draft." also apply to quick notes?
I think that is a reasonable path; SWE/ML will give you a good foundation in your early career anyway if you want to switch to AI safety later as you build experience. Additionally, something in security would be a good idea as well.
yz’s Quick takes
From some expressions of concern about extinction risks that I have observed, extinction risks might actually be suffering risks. It could be that the expectation of death is torturous. All risks might be suffering risks.
Having read some other comments, career coaching from 80k sounds like a good suggestion!
Some other thoughts:
if you have an area that you care about already, you could do active work in this area
if you have not identified an area yet, AND you don't mind working on things you care less about, maybe find something interesting and relatively high-paying for financial purposes and make monetary donations, or volunteer if you have time
I believe there are all kinds of combinations of the above!
A few thoughts:
Prohibition is a very US-based concept tied to a specific era, and thus the setting for this thought experiment would have to match the same level of context and available information in the US at that time.
There should also be a disentanglement of policy intention from the actual way of executing that intention. For example, banning alcohol in the workplace (and in other specific conditions, such as driving), the risk of losing one's job if drinking at work, or limiting the maximum alcohol percentage could all be better approaches.
I don't know whether EA actions are the ground truth of correct action, which is not what the question is about anyway, but just a thought: Prohibition may have been an ineffective idea independent of EA values.
I previously did some work on model diffing (base vs. chat models) on Llama 2, Llama 3, and Mistral (as they have similar architectures) for the final project of AISES (https://www.aisafetybook.com/), and found some interesting patterns:
https://docs.google.com/presentation/d/1s-ymk45r_ekdPAdCHbX1hP5ZaAPb82ta/edit#slide=id.p3
Planning to explore more and expand; welcome any thoughts/comments/discussions!
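For anyone curious what "model diffing" can look like concretely, here is a minimal sketch of one common starting point: comparing per-parameter weight differences between a base checkpoint and its fine-tuned chat counterpart. This uses NumPy arrays as stand-ins for actual model weights; the parameter names and the relative-norm metric are illustrative assumptions, not necessarily the method used in the slides.

```python
import numpy as np

def param_diff_norms(base_params, chat_params):
    """Per-parameter relative L2 difference between a base and a chat model.

    base_params / chat_params: dicts mapping parameter names to arrays of
    identical shapes (e.g. two checkpoints of the same architecture).
    Returns ||chat - base|| / ||base|| for each shared parameter, so
    larger values flag the parameters that fine-tuning changed the most.
    """
    norms = {}
    for name in base_params.keys() & chat_params.keys():
        base, chat = base_params[name], chat_params[name]
        if base.shape != chat.shape:
            continue  # skip mismatched parameters (e.g. resized embeddings)
        denom = np.linalg.norm(base)
        norms[name] = float(np.linalg.norm(chat - base) / denom) if denom else 0.0
    return norms

# Toy example: the "chat" model perturbs only one of two layers.
rng = np.random.default_rng(0)
base = {"layer0.w": rng.normal(size=(4, 4)),
        "layer1.w": rng.normal(size=(4, 4))}
chat = {"layer0.w": base["layer0.w"].copy(),
        "layer1.w": base["layer1.w"] + 0.1 * rng.normal(size=(4, 4))}
diffs = param_diff_norms(base, chat)
```

In the toy example, `diffs["layer0.w"]` comes out as 0 and `diffs["layer1.w"]` is positive, which is the kind of signal one would then inspect layer by layer on real checkpoints.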
Could you explain how abortion is different in non-democratic cultures, in your opinion?
I see that this may generally be good, but there are cases that require more socially aware education to be discussed. Additionally, this discussion seems to come from a viewpoint that unfortunately only negatively affects or restricts half of humanity; it seems easy for the humans who are not affected to discuss restrictions, and the barrier is unfairly lowered by human nature. I do think writers need to bear some responsibility for knowledge/background learning.
"democratic culture" → could you elaborate on why this is a cultural thing?
I take it as being about equality.
Could you say a bit more about what you want to do with the draft? I assume you would want to criticize/comment on "privileging the fortunate"? And what goals do you want to achieve with the draft? Thanks!