I can’t say how effective they are in this space, but UNHCR is active and reputable.
Terrible situation.
I have been following AUKUS developments in Australia, and have tried to get local EAs interested to little avail.
This should be a hugely important issue to the EA community.
I personally donate to groups like APHEDA https://www.apheda.org.au/ on a hunch that they are effective.
My suspicion is that this community neglects promising opportunities in this space, and I am exploring it myself.
Respectfully disagree with your example of a website.
In a commercial setting, the client would want to examine and approve the solution (website) in some sort of test environment first.
Even if the company provided an end-to-end service, the implementation (buying a domain, etc.) would be done by a human or by non-AI software.
However, I do think it’s possible the AI might choose to inject malicious code that is hard to review.
And I do like your example about terrorism with AI. However, police and governments can counter terrorists with AI too, in the same way that all tools made by humans are used by both good and bad actors. Generally, the government should have access to the more powerful AI and cybersecurity tools, so I expect government AI would come up with defences at least as good as, and probably better than, the attacks devised by terrorists.
One of the reasons I am skeptical is that I struggle to see the commercial incentives to develop AI in a direction that poses X-risk.
Take the paperclip scenario: commercially, a business would use an AI to develop and present a solution to a human, much like how Google Maps suggests an optimal route. But the AI would never be given free rein to both design the solution and action it with no human oversight. There’s no commercial incentive for a business to operate like that.
Especially for “dumb” AI, as you put it: in commercial applications, AI is there to suggest things to humans, but rarely to implement the solution without oversight by a human (I can’t think of a good example of the latter; maybe an automated call centre?).
In a normal workplace, management signs off on the solutions suggested by juniors, and that seems to be how AI is used in business: the AI presents a solution, then a human approves it and a human implements it.
Is there anything that makes you skeptical that AI is an existential risk?
I felt the article was pretty concrete in saying exactly that: “crypto is bad …”. It didn’t strike me as high-level/abstract at all.
I’m not the author, but there was a very prescient critique submitted to the EA criticism contest, that went underappreciated. https://medium.com/@sven_rone/the-effective-altruism-movement-is-not-above-conflicts-of-interest-25f7125220a5
UPDATE: actually, I realised the post did specifically mention this critique as an example.
Evidence-based Effective Altruists
Hi Peter,
Is knowledge of/aptitude with CTMC common among actuaries?
Yes, it’s required learning for actuaries; they may just need to brush up on their lecture notes.
Also, have you considered doing some more work to apply it to ER with expert support?
Absolutely. I think, before I embark on further work, I would really like to talk with cause prioritisers/grant-makers to confirm that they would have confidence in this kind of modelling, and to understand what kinds of outputs they would value.
Very much agree re time-inhomogeneity. Some people may see time-inhomogeneous rates as a bug (fair), but in many ways they are a feature. I’ve said in the post that CTMCs can help disagreeing X-risk modellers understand the precise source of their disagreements (i.e. differing time-inhomogeneity assumptions).
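To make that concrete, here is a minimal sketch (my own toy example, not from the post) of how two modellers who agree on the time-averaged catastrophe hazard but make different time-inhomogeneity assumptions end up with different catastrophe probabilities. The three states, the 0.005/yr “lock-in” rate, and the two hazard schedules are all assumptions made up for illustration:

```python
import numpy as np
from scipy.linalg import expm

def generator(catastrophe_rate, safe_rate=0.005):
    """Generator for a 3-state chain: 0 = business as usual,
    1 = safe/locked-in (absorbing), 2 = catastrophe (absorbing).
    Off-diagonal rates are non-negative and each row sums to zero."""
    return np.array([
        [-(safe_rate + catastrophe_rate), safe_rate, catastrophe_rate],
        [0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0],
    ])

def catastrophe_prob(rate_schedule):
    """rate_schedule: list of (duration_in_years, catastrophe_rate) pieces.
    A piecewise-constant (time-inhomogeneous) chain: chain together the
    transition matrices expm(Q_k * t_k) of the pieces in chronological order."""
    P = np.eye(3)
    for duration, rate in rate_schedule:
        P = P @ expm(generator(rate) * duration)
    return P[0, 2]  # start in state 0; probability of ending in catastrophe

# Both modellers assume the same average hazard over 100 years (0.2%/yr),
# but disagree about when the risk is concentrated.
front_loaded = [(50, 0.003), (50, 0.001)]   # Modeller A: risk is mostly near-term
back_loaded  = [(50, 0.001), (50, 0.003)]   # Modeller B: risk ramps up later

print(f"Modeller A (front-loaded): {catastrophe_prob(front_loaded):.1%}")
print(f"Modeller B (back-loaded):  {catastrophe_prob(back_loaded):.1%}")
```

Because a “safe” absorbing state can be reached in the meantime, the two schedules give different answers (roughly 15% vs 13.5% with these toy numbers), which pins the disagreement down to the time profile of the hazard rather than its average level.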
Thank you for this. I think you’re right.
I’ll issue a correction.
I could only find one, Robert Menendez, with positions that might be deemed anti-crypto before the FTX collapse. He sponsored some anti-money-laundering bills taking aim at Russia, Venezuela, and El Salvador: https://www.coinbase.com/public-policy/legislative-portal/nj/senate/n6vBNOTG2gwWQ1c3jOQqn
A handful of these candidates have also become anti-crypto post-FTX collapse, and are beginning to return or donate the money received from FTX/SBF. Chuy Garcia is one example.
Is there any chance you could reconsider? This post is not about my personal politics, or advocacy about any political candidates.
It’s about the perceived misuse of an EA-linked fund called Protect Our Future.
The diagonal entries are defined in another way. See the link:
Apologies, I could have made this clearer. It is only those diagonal entries which are allowed to be negative. In fact they must be negative (or zero).
Technically, the diagonal entries correspond to “transitions” from state i to state i (i.e. they are not really transitions, but rather a measure of retention). You can think of the sign as indicating whether an entry measures the rate of transitioning away from the state (non-negative, off-diagonal) or retention in the state (non-positive, diagonal).
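For anyone who hasn’t worked with generator matrices before, here is a toy illustration (mine, not from the post) of those properties: non-negative off-diagonal rates, non-positive diagonal entries, and rows summing to zero. The states and rates are made up:

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state CTMC generator: entry q_ij (i != j) is the rate of moving
# from state i to state j; the diagonal q_ii is minus the total rate of
# leaving state i, so each row sums to zero.
Q = np.array([
    [-0.30,  0.20,  0.10],   # leave state 0 at total rate 0.30 per unit time
    [ 0.05, -0.05,  0.00],   # leave state 1 at total rate 0.05
    [ 0.00,  0.00,  0.00],   # state 2 is absorbing: nothing leaves it
])
assert np.allclose(Q.sum(axis=1), 0)   # rows sum to zero
assert np.all(np.diag(Q) <= 0)         # diagonal entries are negative (or zero)

# The matrix of transition probabilities over a horizon t is expm(Q * t);
# each of its rows is a proper probability distribution.
P = expm(Q * 10.0)
print(np.round(P, 3))
```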
And why do you presume your sources are better?
Your source is a fringe Twitter account, followed by alt-right accounts, cherry-picking bits and pieces from journals. It doesn’t even properly link to the primary sources, so I can’t examine the weaknesses or context.
Worry more about Jacobs’s sources contradicting your sources.
You and I have very opposite readings of the Sam Harris vs Ezra Klein fiasco.
I’d like to hear what you think about Klein’s point that environmental factors may explain >100% of the black-white IQ gap, and yet this is alien to the race-realism discourse. https://forum.effectivealtruism.org/posts/ALzE9JixLLEexTKSq/cea-statement-on-nick-bostrom-s-email?commentId=YN85c93DD3EiNLFfo
counterproductive for the goal of fighting racism to stake your case on scientific claims that could turn out to be false.
There is so much evidence at this point against race realism/HBD. There is no way it “could turn out to be false” without invoking some grand conspiracy. Can we never call it pseudoscience? My goal is to fight for scientific truth, not some anti-racist agenda. Check out Ben Jacobs’s great resources.
I think it’s fair for Davis to characterise Schmidt as a longtermist.
He’s recently been vocal about AI X-risk. He funded Carrick Flynn’s campaign, which was openly longtermist, via the Future Forward PAC, alongside Moskovitz and SBF. His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.
And there are longtermists who are pro-AI, like Sam Altman, who want to use AI to capture the lightcone of future value.
https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html