Day job: Cybersecurity
From England, now living in Estonia.
The other problem I see is that there’s no modifier here for “actually being correct”. If person A presents a correct mathematical proof for X, and person B presents a mathematical proof for not X that is actually false, do they both get 20 points?
If you check the proofs yourself and you can see that one is obviously wrong and the other is not obviously (to you) wrong then you only give the not-obviously-wrong one 20 points. If you can’t tell which is wrong then they cancel out. If a professor then comes along and says “that proof is wrong, because [reason that you can’t understand], but the other one is OK” then epistemically it boils down to “tenured academic in field − 6 points” for the proof that the professor says is OK.
Thank you for explaining the “Big borderless workspace” concept. This is the first time I have seen a reasonable-looking argument in favour of company policies restricting employees’ actions outside work, something which I had previously seen as a pure cultural-imperialist power grab by oppressive bosses.
A lot of work does go into it, but the users will mostly ignore that work and continue using “[Dog’s name] + [Wife’s birthday]” as a password (and that’s if you’re lucky).
I don’t understand. Who is the victim that I am blaming here? I was trying to support FLI by pointing out that, in my experience, when I hear an organisation described as “far-right”, that tells me very little except that it is probably not openly and explicitly Marxist. Perhaps Nya Dagbladet is one of the rare exceptions who actually deserve the label—I don’t know, I never heard of them before this controversy. But I definitely sympathise with FLI’s not immediately giving 100% credence to the accusations.
The quality of public discourse worldwide has degraded so badly, with casual name-calling using highly charged labels, that many of these types of accusations are open to question upon examination.
That is definitely my reaction to any story with “Far-Right” in the headline.
If SBF committed fraud, there’s a distinct possibility that SBF will use altruism as a defence and/or justification for his actions in the coming months.
Sadly, I think his having been the second largest donor to the Biden 2020 campaign fund will be a more effective defence. It certainly worked for the people who lost hundreds of billions of Other People’s Money in 2008.
If you have a software engineering background but no particular expertise in biology or information security, then I would suggest trying to find some existing open-source software project which is helpful to biosecurity work and then help to make it more robust and user-friendly. I haven’t worked in biosecurity myself, but I can tell you from experience in other areas of biology that there are many software packages written by scientists with no training in how to write robust and usable software, and so there is a lot of low-hanging fruit for someone who can configure automatic testing, use a debugger or profiler, or add a GUI or web front end.
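For illustration, here is roughly what I mean by the lowest-hanging fruit of "configure automatic testing". The package name (`seqtools`) and function (`parse_fasta`) are made-up placeholders, not a real biosecurity project; the point is that even a couple of pytest tests like this, run on every commit, catch the kind of breakage scientist-written code most often suffers from:

```python
# test_parser.py -- run with `pytest`
# Assumes a hypothetical module `seqtools` with a function `parse_fasta`
# that returns a dict mapping sequence IDs to sequences.

import seqtools


def test_parse_fasta_basic(tmp_path):
    # Write a tiny FASTA file into a temporary directory provided by pytest.
    fasta = tmp_path / "example.fasta"
    fasta.write_text(">seq1\nACGT\n>seq2\nGGCC\n")

    records = seqtools.parse_fasta(str(fasta))

    assert records == {"seq1": "ACGT", "seq2": "GGCC"}


def test_parse_fasta_empty_file(tmp_path):
    # Scientists' scripts often crash on edge cases like empty input;
    # a test like this pins down the intended behaviour explicitly.
    fasta = tmp_path / "empty.fasta"
    fasta.write_text("")

    assert seqtools.parse_fasta(str(fasta)) == {}
```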
Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand.
Nowadays the amounts have to be extremely large before it is worth the effort of setting up a distributed system. You can fit 1 TB of RAM and several hundred TB of disk space in a commodity 4U server at a price equivalent to a couple of weeks of salary + overhead for someone with the skills to set up a high performance cluster, port your software to work on it, or debug the mysterious errors.
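As a rough sketch of that arithmetic (all figures below are my own assumptions for illustration, not quotes):

```python
# Back-of-envelope comparison of "one big server" vs. the labour cost of
# going distributed. Every figure here is an assumption, not a real quote.

server_price = 15_000        # 4U box, ~1 TB RAM, hundreds of TB disk (assumed)

engineer_salary = 150_000    # annual salary of someone who can build and debug a cluster (assumed)
overhead_factor = 1.5        # employer overhead on top of salary (assumed)

weekly_labour_cost = engineer_salary * overhead_factor / 52
weeks_equivalent = server_price / weekly_labour_cost

print(f"Weekly cost of cluster engineer: ~{weekly_labour_cost:,.0f}")
print(f"Server price in engineer-weeks:  ~{weeks_equivalent:.1f}")
# With these assumptions the server costs roughly three to four
# engineer-weeks, and cluster setup, porting and debugging can easily
# consume that much time before you have bought a single node.
```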
I don’t have any in-depth knowledge of this field, but my guess is that, out of the set of interventions whose effectiveness is easy to measure, the most effective ones will be those that target internally or regionally displaced refugees in the third world, rather than refugees who make it to first-world countries.
One reason for avoiding talking about “1-to-N” moral progress on a public EA forum is that it is inherently political. I agree with you on essentially all the issues you mentioned in the post, but I also realise that most people in the world and even in developed nations will find at least one of your positions grossly offensive—if not necessarily when stated as above, then certainly after they are taken to their logical conclusions.
Discussing how to achieve concrete goals in “1-to-N” moral progress would almost certainly lead “moral reactionaries” to start attacking the EA community, calling us “fascists” / “communists” / “deniers” / “blasphemers” depending on which kind of immorality they support. This would make life very difficult for other EAs.
Maybe the potential benefits are large enough to exceed the costs, but I don’t even know how we could go about estimating either of these.
I would love to see hiring done better at EA organizations, and if there was some kind of “help EA orgs do hiring better” role I would jump at the chance.
This would be great. Changing the human parts of the hiring process would be a lot of work, but if you can just get organizations to use some kind of software that automatically sends out “We received your application” and “Your application was rejected” e-mails then that would be a good start.
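For what it’s worth, the acknowledgement-e-mail part needs very little machinery. A minimal sketch, assuming a hypothetical SMTP host, sender address and wording (a real deployment would plug into whatever applicant-tracking system and mail service the organisation already uses):

```python
# Minimal sketch of an automatic acknowledgement e-mail for job applicants.
# The SMTP host, sender address and wording are placeholders.

import smtplib
from email.message import EmailMessage


def send_acknowledgement(applicant_email: str, role: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"We received your application for {role}"
    msg["From"] = "recruiting@example.org"
    msg["To"] = applicant_email
    msg.set_content(
        "Thank you for applying. We have received your application and "
        "will contact you once we have reviewed it."
    )

    # Assumes a mail server reachable at this hostname; in practice you
    # would use the organisation's own SMTP service or a transactional
    # mail provider.
    with smtplib.SMTP("smtp.example.org") as server:
        server.send_message(msg)
```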
Good point. So if we can’t hope for state alignment then that is an even stronger reason to oppose building state capabilities.
If there is a gene for “needing less sleep, high behavioural drive, etc”, which seems like it ought to give an evolutionary advantage, and yet only a very small fraction of the population have the gene, there must be a reason for this.
I can think of the following possibilities:
1. It is a recent mutation.
2. The selective advantage of needing less sleep is not as great as it seems. (e.g. before artificial lighting was widespread, you couldn’t get much done with your extra hours of wakefulness.)
3. The gene also has some kind of selective disadvantage. (If we are lucky, the disadvantage will be something like “increased nutritional requirements”, which is not a big problem in the present day.)
Do you have any idea which of these is the case?
Improving state capacity without ensuring the state is aligned to human values is just as bad as working on AI capabilities without ensuring that the AI is aligned to human values. The last few years have drastically reduced my confidence in “state alignment” even in so-called “liberal” democracies.
Some additional relevant historical background: in 1938, and especially before Kristallnacht, it was not at all obvious how bad the Nazi persecution of Jews would subsequently become. The Wannsee Conference, where the Nazi leadership decided to implement the “final solution”, was still four years in the future. The pogroms of 19th-century Russia were still within living memory, and Western democracies still had colonial empires, Jim Crow laws, lynchings, and similar abominations. Without accurate statistics, it would have been hard to tell whether the newspaper stories coming out of Germany were any worse.
It would be interesting to hear what gave the organisers of the Kindertransport the foresight to know that this problem was urgent.
Pedantry: it’s Kristallnacht not Kristelnacht.
Given that the immediate previous heading is “Take seriously the idea that you may be doing harm”, I think we should give the OP the benefit of the doubt that he is aware that fully open borders might cause harms as well as benefits.
I have worked as a programmer in academia (at a scientific research institute rather than a university), as my first job after a PhD in a natural science field where I had done some programming for data analysis etc. My main motivation was to get a CV entry with “software developer” in the job title, such that I would then have an easier time finding a software job in the private sector. (With hindsight I don’t think this was necessary, but there are probably many people now under the same misapprehension that I was then.) Depending on the kind of programming skills you are looking for, it might be possible to find someone in a similar situation.
One possibly scalable intervention here would be a dating site (or other matchmaking service) that didn’t have the fundamental conflict of interest where its income stream depends on its users failing to form successful long-term relationships.
The probability of success would be low, and even if you did gain a large market share it would only solve a fraction of the problem, but the cost of trying might be low enough that despite those factors it would still be a worthwhile philanthropic investment.
Stroomi Rand is really long. Can you say more precisely where the picnic will be? Or can you simply not say, because you don’t know which barbecue spot will be free?