Day job: Cybersecurity
From England, now living in Estonia.
All open borders advocates support broadly similar policies (reducing barriers to migration).
There will be large differences in people’s views about what (if any) other policies are needed to stop open borders from turning into national suicide. For instance, a right-libertarian open borders advocate would say “open borders is incompatible with a generous welfare state, and closed borders is a moral abomination, therefore we must abolish the welfare state (or at least shrink payouts to a level where they are matched to incomes in the poorest country in the world)” whereas a leftist advocate would prefer to trust in migrants quickly becoming productive net taxpayers without any need for welfare restrictions.
The answer I would like to be true is “EA funders are avoiding associating themselves with Open Borders advocacy because it has become a politically partisan issue, and they want to be seen as non-partisan.” However, given that e.g. GiveWell is calling the personal foundation of a controversial political figure “highly aligned”, I don’t think that is the case.
Please give the full names of organisations you refer to with acronyms (CHAI and PATH). That’s particularly important for CHAI since there is also the “Center for Human-Compatible Artificial Intelligence” with the same acronym, so it can be confusing for readers who are familiar with one but not the other (like me until I read a similar article earlier this year).
When you only use the acronym “CHAI” I assumed you were talking about the “Center for Human Compatible Artificial Intelligence” ( https://humancompatible.ai/ ) since this has strong and obvious links to Effective Altruism. Then I followed the link and saw you meant the “Clinton Health Access Initiative”. You should clarify to stop other people having the same misunderstanding.
Stroomi Rand is really long. Can you say more precisely where the picnic will be? Or is it simply not possible to say, because you don’t know which barbecue spot will be free?
The other problem I see is that there’s no modifier here for “actually being correct”. If person A presents a correct mathematical proof for X, and person B presents a mathematical proof for not X that is actually false, do they both get 20 points?
If you check the proofs yourself and you can see that one is obviously wrong and the other is not obviously (to you) wrong then you only give the not-obviously-wrong one 20 points. If you can’t tell which is wrong then they cancel out. If a professor then comes along and says “that proof is wrong, because [reason that you can’t understand], but the other one is OK” then epistemically it boils down to “tenured academic in field − 6 points” for the proof that the professor says is OK.
Thank you for explaining the “Big borderless workspace” concept. This is the first time I have seen a reasonable-looking argument in favour of company policies restricting employees’ actions outside work, something which I had previously seen as a pure cultural-imperialist power grab by oppressive bosses.
A lot of work does go into it, but the users will mostly ignore that work and continue using “[Dog’s name] + [Wife’s birthday]” as a password (and that’s if you’re lucky).
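To put rough numbers on why that pattern is so weak, here is a back-of-envelope sketch; the dictionary sizes and the comparison password length are my own assumptions, purely for illustration:

```python
import math

# Rough back-of-envelope estimate (assumed figures, for illustration only)
common_pet_names = 1_000          # assume the attacker tries ~1,000 popular pet names
plausible_birthdays = 100 * 366   # any date of birth in the last ~100 years

guess_space = common_pet_names * plausible_birthdays
print(f"pet name + birthday: ~{guess_space:,} guesses (~{math.log2(guess_space):.0f} bits)")

# Compare with a random 12-character password drawn from ~72 printable symbols
random_space = 72 ** 12
print(f"random 12 chars:     ~{random_space:.2e} guesses (~{math.log2(random_space):.0f} bits)")
```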
I don’t understand. Who is the victim that I am blaming here? I was trying to support FLI by pointing out that, in my experience, when I hear an organisation described as “far-right”, that tells me very little except that it is probably not openly and explicitly Marxist. Perhaps Nya Dagbladet is one of the rare exceptions who actually deserve the label—I don’t know, I never heard of them before this controversy. But I definitely sympathise with FLI’s not immediately giving 100% credence to the accusations.
The quality of public discourse worldwide has degraded so badly, with casual name-calling using highly charged labels, that many of these types of accusations are open to question upon examination.
That is definitely my reaction to any story with “Far-Right” in the headline.
If SBF committed fraud, there’s a distinct possibility that SBF will use altruism as a defence and/or justification for his actions in the coming months.
Sadly, I think his having been the second largest donor to the Biden 2020 campaign fund will be a more effective defence. It certainly worked for the people who lost hundreds of billions of Other People’s Money in 2008.
If you have a software engineering background but no particular expertise in biology or information security, then I would suggest trying to find some existing open-source software project which is helpful to biosecurity work and then help to make it more robust and user-friendly. I haven’t worked in biosecurity myself, but I can tell you from experience in other areas of biology that there are many software packages written by scientists with no training in how to write robust and usable software, and so there is a lot of low-hanging fruit for someone who can configure automatic testing, use a debugger or profiler, or add a GUI or web front end.
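To make the low-hanging fruit concrete, here is a minimal sketch of the kind of automated test you could contribute to such a package. The function, its name, and the expected values are hypothetical stand-ins rather than anything from a real project:

```python
# test_gc_content.py -- a minimal pytest example (hypothetical function and values)
# Run with:  pytest test_gc_content.py
import pytest

def gc_content(sequence: str) -> float:
    """Fraction of G/C bases in a DNA sequence (stand-in for a real package function)."""
    sequence = sequence.upper()
    if not sequence:
        raise ValueError("empty sequence")
    return sum(base in "GC" for base in sequence) / len(sequence)

def test_typical_sequence():
    assert gc_content("GATTACA") == pytest.approx(2 / 7)

def test_empty_sequence_is_rejected():
    with pytest.raises(ValueError):
        gc_content("")

def test_lowercase_input_is_handled():
    assert gc_content("gattaca") == pytest.approx(2 / 7)
```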
Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand.
Nowadays the amounts have to be extremely large before it is worth the effort of setting up a distributed system. You can fit 1 TB of RAM and several hundred TB of disk space in a commodity 4U server at a price equivalent to a couple of weeks of salary + overhead for someone with the skills to set up a high performance cluster, port your software to work on it, or debug the mysterious errors.
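As a rough illustration of that back-of-envelope comparison (all of the figures below are assumptions I have made up for the sketch, not quotes for real hardware or salaries):

```python
# Back-of-envelope: does the data set even need a cluster? (all figures are assumptions)
dataset_tb = 50                 # size of the data set you need to process, in TB
server_disk_tb = 300            # local disk in a commodity 4U server
server_cost_usd = 20_000        # rough price of that server, including 1 TB of RAM
engineer_week_usd = 8_000       # rough fully loaded weekly cost of a cluster specialist

if dataset_tb <= server_disk_tb:
    weeks_equivalent = server_cost_usd / engineer_week_usd
    print(f"Fits on one box; the hardware costs roughly {weeks_equivalent:.1f} weeks "
          f"of specialist time.")
else:
    print("Genuinely too big for one box -- a distributed system may be justified.")
```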
I don’t have any in-depth knowledge of this field, but my guess is that, out of the set of interventions whose effectiveness is easy to measure, the most effective ones will be those that target internally or regionally displaced refugees in the third world, as opposed to those who make it to first world countries.
One reason for avoiding talking about “1-to-N” moral progress on a public EA forum is that it is inherently political. I agree with you on essentially all the issues you mentioned in the post, but I also realise that most people in the world and even in developed nations will find at least one of your positions grossly offensive—if not necessarily when stated as above, then certainly after they are taken to their logical conclusions.
Discussing how to achieve concrete goals in “1-to-N” moral progress would almost certainly lead “moral reactionaries” to start attacking the EA community, calling us “fascists” / “communists” / “deniers” / “blasphemers” depending on which kind of immorality they support. This would make life very difficult for other EAs.
Maybe the potential benefits are large enough to exceed the costs, but I don’t even know how we could go about estimating either of these.
I would love to see hiring done better at EA organizations, and if there was some kind of “help EA orgs do hiring better” role I would jump at the chance.
This would be great. Changing the human parts of the hiring process would be a lot of work, but if you can just get organizations to use some kind of software that automatically sends out “We received your application” and “Your application was rejected” e-mails then that would be a good start.
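As a minimal sketch of what that first step could look like (the SMTP host, addresses, and role name are placeholders; a real applicant-tracking system would handle this for you):

```python
# Minimal sketch of an automatic acknowledgement e-mail (placeholder host and addresses)
import smtplib
from email.message import EmailMessage

def send_acknowledgement(applicant_email: str, role: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"We received your application for {role}"
    msg["From"] = "recruiting@example.org"
    msg["To"] = applicant_email
    msg.set_content(
        f"Thank you for applying for the {role} position.\n"
        "We will be in touch once we have reviewed all applications."
    )
    # A real mail server would also require authentication before sending.
    with smtplib.SMTP("smtp.example.org", 587) as server:
        server.starttls()
        server.send_message(msg)

# Example: send_acknowledgement("applicant@example.com", "Operations Manager")
```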
Good point. So if we can’t hope for state alignment then that is an even stronger reason to oppose building state capabilities.
If there is a gene for “needing less sleep, high behavioural drive, etc”, which seems like it ought to give an evolutionary advantage, and yet only a very small fraction of the population have the gene, there must be a reason for this.
I can think of the following possibilities:
1. It is a recent mutation.
2. The selective advantage of needing less sleep is not as great as it seems (e.g. before artificial lighting was widespread, you couldn’t get much done with your extra hours of wakefulness).
3. The gene also has some kind of selective disadvantage. (If we are lucky, the disadvantage will be something like “increased nutritional requirements”, which is not a big problem in the present day.)
Do you have any idea which of these is the case?
Improving state capacity without ensuring the state is aligned to human values is just as bad as working on AI capabilities without ensuring that the AI is aligned to human values. The last few years have drastically reduced my confidence in “state alignment” even in so-called “liberal” democracies.
Hang on—why do you have to be a student to join an EA group?