Hi there! I’m an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC
I think there is a chance you’re overcomplicating it a bit 😅. I think she is just trying to create a culture where people don’t feel socially anxious about leaving EA if that is good for their mental health. Social norms are present everywhere, including in EA, and even if we are quite nerdy and prone to rule-following, the pressure and expectation to do good can work against some members’ mental health.
And then, she is also saying that everyone should feel entitled to choose not to sacrifice (a significant portion of) their happiness to improve the world, and not feel bad about it. People sometimes rationalize this as maximal sacrifice not being sustainable, but I prefer to understand it as simply allowing people not to be maximally altruistic, as opposed to merely maximally efficient with their altruistic budget. In a similar spirit to https://mindingourway.com/youre-allowed-to-be-inconsistent/
I think my critique would be different from the ones described above. It is that convincing people to go vegan (the end goal) takes time: they need to internalize the reasons and come to agree. It is naive to believe that we are Bayesian beings who will simply update their preferences if presented with a really smart argument.
Rather, I believe it is more effective to invite them to try out things such as Meatless Monday, Veganuary, or similar. My intuition is that the key is showing that it’s not so costly to change (it really isn’t).
I acknowledge that the proposed kind of protest might help, or might even be necessary in the future, but I’m not sure the time has come for it to succeed at a societal level. Part of this is because you are sending the message that going vegan has a very strong and narrow identity attached, and that you expect other potential vegans to also pay the social cost of such behavior. If they perceive going vegan as expensive, they are probably even more likely to refuse it. Furthermore, the simple fact that every time you eat with other people you refuse to eat meat is a way of signaling that you perceive it as wrong; and every time they decide where to go or what to buy, they will be reminded of this fact.
So in summary, I’d rather go with the carrot than the stick here. At least for the time being.
Actually, Habiba, I think there is one more thing that you do during calls that is somewhat subtle: as noted elsewhere, most impactful career paths require doing unconventional things and are very challenging. Talking to you helped me feel less scared of trying these things out, and supported 😊. Perhaps this is less relevant for people with lots of social contact with others in the community, but even my family does not yet understand why I would not want to aim for a nice peaceful career, since “you can do good in any job”, and the kind of problems we worry about are not popular. Perhaps we should create some position in local communities for this kind of support, though 🧐?
From the perspective of a PhD student in quantum computing, I would say that one should not worry excessively about quantum computing breaking cryptography. This is mainly for two reasons:
1. As pointed out in other comments by RavenclawPrefect and beth, so-called “post-quantum” cryptographic algorithms are being developed that should not be vulnerable to quantum attacks (NIST is holding a contest to develop the future standard). I am not especially skilled in this particular topic, but it seems that some approaches based on hash functions or lattices could be feasible. These are just the usual kind of public-key mathematical cryptography, only built on harder problems.
2. Even in the highly unlikely situation where the above point fails, quantum mechanics itself gives you a solution: quantum cryptography is theoretically invulnerable to almost any kind of attack. I say theoretically because quantum devices are not perfect, and an adversary may be able to exploit those imperfections. The most famous quantum key distribution protocols are BB84 (the first one to be discovered) and Artur Ekert’s protocol based on Bell inequalities. To the best of my knowledge, the research edge is now on “device-independent quantum cryptography”, in which you may be using a device from a supplier you do not trust. This path to secure cryptography is a more physical one: it simply finds a way to perform private key distribution safely.
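For intuition, here is a minimal toy simulation of the sifting step of BB84 (no eavesdropper, no noise; the variable names and key length are illustrative, not from any real implementation):

```python
import random

n = 1000  # number of qubits Alice sends

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures each incoming qubit in his own randomly chosen basis
bob_bases = [random.randint(0, 1) for _ in range(n)]

# If Bob's basis matches Alice's he recovers her bit exactly; otherwise
# quantum mechanics makes his measurement outcome uniformly random
bob_bits = [bit if a == b else random.randint(0, 1)
            for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Over a public channel they compare bases (never bits!) and keep only
# the positions where the bases agreed: the "sifted" key, ~n/2 bits long
alice_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
bob_key   = [bit for bit, a, b in zip(bob_bits,   alice_bases, bob_bases) if a == b]

assert alice_key == bob_key  # with no eavesdropper the sifted keys agree
print(f"Shared secret bits: {len(alice_key)}")
```

The security comes from the step this sketch omits: an eavesdropper who measures in the wrong basis disturbs the qubits, so Alice and Bob can detect her by publicly comparing a random sample of the sifted key.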
In conclusion, I do not expect QC to make cryptography unfeasible; rather, it seems more likely that cryptography will become even harder to break.
That said, I am actually trying to figure out in my PhD whether there could be any interesting areas of research where QC may be useful in the field of AI Safety. An argument for it is that there exists a research topic called quantum ML, which is still in its infancy. On the other hand, AI Safety may not require especially compute-expensive algorithms, but rather the right approaches (perhaps also at a higher level of abstraction than you would have in QC). I say this because I would be very interested in hearing from anyone who would like to work on similar topics (because they have this particular background) and/or has particular insights into ideas that could help.
Thanks!
Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately, my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.
The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup that quantum computing provides for a range of problems is good enough to justify using it. However, once one takes into account hardware requirements such as error correction, you lose some 10 orders of magnitude of speed, which makes QC unlikely to help with generic problems.
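To make that concrete, here is a back-of-the-envelope sketch of the break-even point for a Grover-style quadratic speedup (the 10^10 per-operation slowdown is an illustrative round figure, not a precise estimate):

```python
# Grover-style search: classical cost ~ N operations, quantum cost
# ~ sqrt(N) logical operations. If error correction and slower clocks
# make each quantum operation ~10^10 times more expensive, quantum
# only wins when slowdown * sqrt(N) < N, i.e. N > slowdown**2.

slowdown = 1e10                 # assumed per-operation overhead (illustrative)
break_even = slowdown ** 2
print(f"Quantum advantage only for N > {break_even:.0e}")  # N > 1e+20
```

At those problem sizes even the classical computation is already astronomically expensive, which is why a quadratic speedup alone rarely justifies QC for generic problems.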
Where QC shines is in analyzing and predicting the properties of quantum systems, as in chemistry and materials science. This is very useful by itself, and it may bring us new batteries, new drugs… but it is different from AI.
Also, for cryptography there might be some applications, but one can already use quantum-resistant classical cryptography, so I’m not very excited about cryptography as an application.
I think the best way for this kind of thing would be if some important donor (whom they may already know personally) invited them directly to talk to GiveWell, for example, rather than us trying to sell the idea. Personal connections are the best way I can think of. And then books, some events… all help a bit, but I’d start with who they know.
I think the most important GITV moments are often connected to people, and I think we as a community should put effort into understanding new people. In my case, I had not one but two different moments, with different people connected to each:
-
The first one happened in early 2018 in Oxford. I was lucky enough to have gone there for a master’s degree, and mid-way through it I found out about a talk by Catherine Hollander, from GiveWell, on making the best out of your money. I had always been interested in how to end poverty, so it was quite appealing. There I met Darius Meissner; they invited me for a dinner after the event, and I think he really took the time to understand me well. In particular, I vividly remember one conversation where he argued that eating animals might not be good, and me saying that I cared more about the environmental reasons. This is something that in retrospect feels weird, almost as if the only reason I said it was that I had heard other people believe so. But it also highlights something important: changing the (moral) views of people takes time, even when the arguments make a lot of sense. The good thing was that such a simple conversation nudged me towards becoming a vegetarian, and I remember already buying much less meat back then.

2018 was also a year when EAs were putting a lot of focus on longtermism. Indeed, after this first event, Huw from EA Oxford invited me to talk a bit and gave me a couple of books to read. Unfortunately, the talk, focused mostly on longtermism, did not resonate with me. I was (1) confused about the fuzzy arguments for the “astronomical” importance of the long-term future, and (2) unsure whether that was even actionable; it’s not like going vegetarian or donating money. They were arguing for a career change, which probably felt like too much. Later on, I attended a couple of career planning events, which I liked, but I never really bought into longtermism that much. I think I somewhat took it into account when choosing my PhD, but it was far from being the main consideration. The second career retreat especially was a bit off-putting: too much crazy conversation around paperclips and the like. I remember being in a conversation where someone was arguing that having fewer children was better because it would free up more time to do greater amounts of good. I honestly think this is the kind of weird stuff that is not really super helpful. Maybe intellectually interesting, but not… the kind of thing we should be focused on?
-
After that, I went back to Spain, to Madrid, not knowing that there was a nascent group there. I found out about the group relatively quickly afterwards, because I enjoyed going to entrepreneurship events and they were hosting an intro event at the Google campus. I honestly did not know whether I would stick around or not, but I think the reason I did was that Pablo Melchor was there and was very welcoming and willing to listen. They needed someone to help organize events, and I helped him more or less until COVID happened, when big changes took place and we became more international (now there is one large Spanish-speaking group 🙂). I also remember talking to Jaime Sevilla (a good friend of mine) and feeling a bit defensive, because he wanted to know what I was most interested in and encouraged me to take action and organize events right away 😜
From all of this, I try to remember that doing good as an EA is socially demanding, as it requires doing things people don’t usually do. For that reason, I try to give people time and space to learn. I think reading is a great way to learn more and get more engaged, and as a community we have really good written material. On the other hand, my path to EA is very different from that of people who, like Jaime, got interested in EA from a rationalist perspective. In any case, it is good to remember that EA is very weird, and that at some point you were on the other side of the conversation, where you would have liked to feel included and listened to.
-
A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speak to them.
I also think that it would be worth exploring ways to give feedback with as little time cost as possible.
My intuition is that there is also some potential cultural damage, not from the money the community has, but from not communicating well that we also care a lot about many standard problems such as third-world poverty. I feel that too often the cause prioritization step is taken for granted or treated as obvious, and this can lead to a culture where “cool AI Safety stuff” is the only thing worth doing.
One advantage of centralized grantmaking, though, is that it can convey more information, thanks to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only top venture capitalists.
My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer
I think it would be an AGI very capable of chemistry :-)
one might even wonder what learnable quantum circuits / neural networks would entail.
Right now they mostly mean lots of problems :P More concretely, there are some results indicating that quantum NNs (or variational circuits, as they are called) are unlikely to be more efficient at learning classical data than classical NNs are. Although I agree this is all still a bit up in the air.
Does alphafold et al render the quantum computing hopes to supercharge simulation of chemical/physical systems irrelevant?
By chemistry I mean electronic structure simulation. Other than that, proteins are quite classical; that’s why AlphaFold works well, and why it is highly unlikely that neurons have any quantum effects involved in their functioning.
Or would a ‘quantum version of alphafold’ trounce the original?
For this I even have a published article showing that the answer is (probably) no: https://arxiv.org/pdf/2101.10279.pdf (published in https://iopscience.iop.org/article/10.1088/2058-9565/ac4f2f/meta)
Where will exponential speedups play a role in practical problems? Simulation? Of just quantum systems, or does it help with simulating complex systems more generally? Any case where the answer is “yes” is worth thinking about the implications of wrt AI safety.
My intuition is no; but even if that were the case, it is unlikely to be an issue for AI Safety: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial
Thanks in any case, Mantas :)
I do think it is good to have the EA forum as a place of discussion and disagreement on how to improve the world.
So, I think perhaps our disagreement is that I don’t think we have reached the critical mass needed to stigmatize it yet. In the US, and Western countries in general, veganism is at ~1-2%. Vegetarianism and flexitarianism might raise that to 8-10%. My intuition is that at this level one would still be seen as eccentric enough to signal that animals are worth welfare considerations, but also eccentric enough that you risk marginalization (and very little impact?) if you attempt structural change.
I don’t see convincing people to go vegan as the end goal
I think we should agree that the objective is to have fewer animals suffer under the current animal agriculture system (or even in life in the wild, if we want to go wild 😝). It seems that 90 to 95% of this objective is making people eat less or no meat. So I’d say that the objective should be something along those lines, such as going vegan, no?
I divide my donation strategy into two components:
-
The first one is a monthly donation to Ayuda Efectiva, the effective giving charity in Spain, which also allows for tax deductions. For the time being, they mostly support global health and poverty causes, which is boringly awesome.
-
Then I make one-off donations to specific opportunities that appear. Those include, for example, a donation to Global Catastrophic Risks to support their work on recommendations for the EU AI Act sandbox (to be first deployed in Spain), some volunteer work for the FLI existential AI risk community, and my donation to this donation election, to make donations within the EA community more democratic :)
For this donation election I have voted for Rethink Priorities, the EA Long-Term Future Fund, and ALLFED. ALLFED’s work seems pretty necessary and they are often overlooked, so I am happy to support them. The other two had relatively convincing posts arguing for what they could do with additional funding. In particular, I am inclined to believe Rethink Priorities’ work benefits the EA community quite widely; I am happy to support them and would love them to keep carrying out the annual survey.
-
I certainly want to be replaced by better AI Safety researchers (or any other workers in important areas) so that I don’t have to make the personal sacrifice of working on them. I still put a lot of effort into being the best, but secretly wish there were someone better to do the job. Funny. Also, a nice excuse to celebrate rejection if you apply to an EA job.
I think it is wrong to say that the Syrian refugee crisis might have cost Germany 0.5T. My source: https://www.igmchicago.org/surveys/refugees-in-germany-2/. To be fair, though, I have not found a more recent analysis, and I am far from an expert.
You’re right Ryan, I’ll modify the second complicated sentence. To tell the truth, I am actually not sure what the difference between tenure and tenure track is.
However, in one of the documents above I saw that institution is not such a strong predictor (point 4), while the h-index seemed useful (it is discussed in point 2).
Hi Steven,
Possible claim 2: “We should stop giving independent researchers and nonprofits money to do AGI-x-risk-mitigating research, because academia is better.” You didn’t exactly say this, but sorta imply it. I disagree.
I don’t agree with possible claim 2. I just say that we should promote academic careers more than independent research, not that we should stop giving independent researchers money. I don’t think money is the issue.
Thanks
Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to pursue the kinds of things we currently do, as work on pandemics or AI Safety can also be justified in terms of preventing global catastrophes.
That being said, I’d very much prefer the EA community’s bottom line to be about doing “the most good” rather than subscribing to longtermism or any other cool idea we might come up with. Those are all subject to change and debate, whereas doing the most good really shouldn’t be.
Additionally, it might be worth highlighting, especially when talking with people unfamiliar with EA, that we deeply care about the suffering of all people alive today. Quoting Nate Soares: