Czech EA, currently (10/2022) taking a few months' vacation after 5 years working as a software developer. After the vacation, I want to find an EA-aligned job.
Fun bit: I competitively play combat juggling.
I understand the sentiment, but for me it would be dangerously close to moral offsetting, which I am not a fan of. E.g. an animal welfare activist might say "eating factory-farmed meat in random restaurants saves me some time, which I am using to help animals". Or "eating meat is okay, because I am saving many more animals by donating to ACE-recommended charities or something".
I think the naive math checks out, but a slightly broader conception of ethics would discourage such reasoning.
But to be more constructive: I subscribed to Anthropic's Claude. Claude Opus should be on a similar level to GPT-4, and I have much more trust in the company (but I am ready to abandon them if they fail to live up to my safety expectations).
But there are a lot of smaller companies that can do a lot with smaller models like Llama, e.g. https://www.phind.com/, which mainly targets coding. Companies like those don't have huge AI ambitions; they are just (very well) leveraging current technology provided by the big players to make a useful product.
Does anyone know the reasoning behind the name change (from AGI to AI Safety Fundamentals)?
It is current CEA policy to mostly ignore critical bad-faith content; see the 2nd FAQ in https://forum.effectivealtruism.org/posts/zgHWeMBPnCMvdoZvz/ea-will-likely-get-more-attention-soon . I would love to see a more thorough explanation of that position, though.
Any ETA on the finished design? Or a way to make sure I don't miss it? :)
EAs should be aware that it is easy to traumatize (their own) kids with apocalyptic narratives.
The EA population is getting older: we are no longer in our early twenties but at a more respectable age. That alone will lead to more and more children being born to EAs. MacAskill recommends having children in WWOTF. I am not sure how many babies his endorsement will produce, but likely a nonzero number.
Raising a child is hard, but EAs probably have one additional opportunity to mess it up a bit more than the average parent. Since we believe the world might end if we don't do enough, we might traumatize our children with that.
Many, many apocalyptic movements do that. On the religious side, you have cults like [Jehovah's Witnesses](https://medium.com/fearless-she-wrote/religious-trauma-is-real-and-more-survivors-need-to-speak-up-about-it-39429c677756) and [mainstream religions](https://www.theguardian.com/world/2016/apr/05/religion-evangelical-christian-apocalypse-josiah-hesse) with their Armageddons (I am not saying EA is a cult; I am saying there is a similarity and potential risk in this specific thing, nothing more). The climate change movement has its [own share](https://blogs.bmj.com/bmj/2020/11/19/our-children-face-pretraumatic-stress-from-worries-about-climate-change/):
> Damaging life-long consequences: Chronic stress can permanently affect brain development as well as brain functioning. Exposure to climate-related stress during early development may have damaging life-long consequences, including maladaptive behaviors, memory problems, problems with attention, diminished inhibition, difficulty regulating emotions, impaired decision making, impaired problem solving, behavioral problems, and priming for future stressful events...
This is not inevitable. The climate movement has produced guides (e.g. [this one](https://media.ourkidsclimate.org/2021/06/Talk-about-climate-guide-for-parents-2021-06-01.pdf), which is very similar to the other lists you find when googling "how to talk to kids about climate change") and probably more material on this topic (which I know nothing about).
This is literally just a random thought I had which I haven't seen discussed yet. I have spent effectively 0 minutes of my life thinking about raising children, child psychology, whether EAs should have children, etc.
So, first of all, SBF probably also had a "regardless of the feelings of the regulators" attitude. Financial regulations around bona fide gambling sites (I am a prediction market fan, but that is what they ultimately are) are there for reasons. Chesterton's fence and all that. I am not saying the regulations couldn't be improved; I just really dislike the sentiment in that sentence, especially in the current situation.
Also, you can create prediction markets with real money without crypto. And there are/were many of them.
If there is some impressive diamond in the crypto ecosystem, prediction markets aren't it, and I am not aware of any.
Agreed, boycotts should be public, hence my post :).
On the other hand, I don’t see myself being able to push this to the next level (very little personal fit). I would be very happy if someone did.