Hi there! I’m an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC
Meditations on careers in AI Safety
The role of academia in AI Safety.
A tough career decision
Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD.
Reflections on EA Global London
The Effective Altruism culture
I think there is a chance you’re overcomplicating it a bit 😅. I think she is just trying to create a culture where people don’t feel socially anxious about leaving EA if that is good for their mental health. Social norms are present everywhere, including in EA, and even though we are quite nerdy and prone to rule-following, the pressure and expectation to do good can conflict with some members’ mental health.
And then, she is also saying that everyone is entitled to choose not to sacrifice (a significant portion of) their happiness to improve the world, and should not feel bad about it. People sometimes rationalize this by saying that total sacrifice is not sustainable, but I prefer to understand it as simply allowing people not to be maximally altruistic, as opposed to merely maximally efficient with their altruistic budget. In a similar spirit to https://mindingourway.com/youre-allowed-to-be-inconsistent/
I think my critique would be different from the ones described above. It is that convincing people to go vegan (the end goal) takes time, because they need to internalize the reasons and come to agree. It is naive to believe that we are Bayesian agents with preferences written in stone, such that one really smart argument will make them change.
Rather, I believe it is more effective to invite them to try out things such as Meatless Monday, Veganuary, or similar. My intuition is that the key is showing that it’s not so costly to change (it really isn’t).
I acknowledge that the proposed kind of protest might help or might even be necessary in the future, but I’m not sure the time has come for it to succeed at a societal level. Part of this is because you are sending the message that going vegan has a very strong and narrow identity attached, and that you expect other potential vegans to also pay the social cost of such behavior. If they perceive going vegan as expensive, they are even more likely to refuse. Furthermore, the mere fact that you refuse to eat meat every time you eat with other people already signals that you perceive it as wrong; every time they decide where to go or what to buy, they will be reminded of this.
So in summary, I’d rather go with the carrot than the stick here. At least for the time being.
Actually, Habiba, I think there is one more thing that you do during calls that is somewhat subtle: as noted elsewhere, the most impactful career paths require doing unconventional things and are very challenging. Talking to you helped me feel less scared of trying out these things, and supported me 😊. Perhaps this is less relevant to people with lots of social contact with others in the community, but even my family does not yet understand why I would not want to aim for a nice peaceful career, since “you can do good in any job”, and the kind of problems that we worry about are not popular. Perhaps we should create some position in local communities for this kind of support, though 🧐?
An appraisal of the Future of Life Institute AI existential risk program
[Question] Should the EA community have a DL engineering fellowship?
Bill Gates book on pandemic prevention
[Question] How to get more academics enthusiastic about doing AI Safety research?
From the perspective of a PhD student in quantum computing, I would say that one should not worry excessively about quantum computing breaking cryptography, mainly for two reasons:
1. As pointed out in other comments by RavenclawPrefect and beth, so-called “post-quantum” cryptographic algorithms are being developed that should not be vulnerable to quantum attacks (NIST is holding a competition to develop the future standard). I am not particularly skilled in this topic, but it seems that some approaches based on hash functions or lattices could be feasible. These are the usual kind of public-key mathematical cryptography, just built on harder problems.
2. Even in the highly unlikely situation where the above point fails, quantum mechanics itself offers a solution: quantum cryptography is theoretically invulnerable to almost any kind of attack. I say theoretically because quantum devices are not perfect, and an adversary may be able to exploit their imperfections. The most famous quantum key distribution protocols are BB84 (the first to be discovered) and Artur Ekert’s protocol based on Bell inequalities. To the best of my knowledge, the research edge is now on “device-independent quantum cryptography”, in which you may use a device from a supplier you do not trust. This path to secure cryptography is a more physical one: find a way to perform private key distribution safely.
In conclusion, I do not expect QC to make cryptography unfeasible; rather, it seems more likely that cryptography will become even harder to break.
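To make the BB84 point above more concrete, here is a toy sketch of the protocol’s sifting step in Python (the function name and structure are mine; it models only an ideal, noiseless channel with no eavesdropper, and omits eavesdropper detection and privacy amplification):

```python
import random

def bb84_sift(n_bits, seed=0):
    """Toy BB84 sifting step: ideal channel, no eavesdropper.

    Alice encodes random bits in randomly chosen bases (Z or X);
    Bob measures each qubit in his own random basis. The positions
    where their bases happen to match form the shared sifted key.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("ZX") for _ in range(n_bits)]
    bob_bases   = [rng.choice("ZX") for _ in range(n_bits)]

    alice_key, bob_key = [], []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            # Matching bases: Bob's measurement deterministically
            # recovers Alice's bit on an ideal channel.
            alice_key.append(bit)
            bob_key.append(bit)
        # Mismatched bases yield a random outcome and are discarded.
    return alice_key, bob_key

alice_key, bob_key = bb84_sift(1000)
assert alice_key == bob_key  # keys agree when nobody eavesdrops
```

About half the bits survive sifting, since the bases match with probability 1/2. The security argument in the real protocol comes from the step this sketch omits: comparing a random sample of the sifted key reveals any eavesdropper, because measuring in the wrong basis disturbs the qubits.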
That said, I am actually trying to figure out in my PhD whether there are interesting areas of research where QC may be useful in AI Safety. An argument in favor is that there is a research topic called quantum ML, which is still in its infancy. On the other hand, AI Safety may not require especially compute-intensive algorithms, but rather the right approaches (perhaps with the higher level of abstraction you would have in QC). I say this because I would be very interested in hearing from anyone who would like to work on similar topics (because they have this particular background) and/or has insights or ideas that could help.
Thanks!
Science policy as a possible EA cause area: problems and solutions
Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately, my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.
The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup quantum computing provides in a range of problems is good enough to justify using it. However, once you take into account hardware requirements such as error correction, you lose some 10 orders of magnitude of speed, which makes QC unlikely to help with generic problems.
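A back-of-the-envelope calculation shows why the quadratic speedup loses to that overhead (the 10-orders-of-magnitude slowdown is the figure from the paragraph above; the cost model is a deliberate simplification):

```python
# Crossover point for a quadratic quantum speedup, assuming each
# logical quantum operation costs SLOWDOWN times a classical one
# once error correction is included (~10 orders of magnitude).
SLOWDOWN = 1e10

def classical_cost(n):
    return n                      # ~n operations classically

def quantum_cost(n):
    return SLOWDOWN * n ** 0.5    # ~sqrt(n) logical ops, each far slower

# Quantum wins only once SLOWDOWN * sqrt(n) < n, i.e. n > SLOWDOWN**2.
crossover = SLOWDOWN ** 2         # 1e20 classical operations
```

Under these assumptions the problem has to require roughly 10^20 classical operations before the quantum machine even breaks even, which is why a merely quadratic speedup is unlikely to matter for generic workloads.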
Where QC shines is in analyzing and predicting the properties of quantum systems, such as chemistry and material science. This is by itself very useful, and it may bring up new batteries, new drugs… but it is different from AI.
Also, cryptography might offer some applications, but one can already use quantum-resistant classical cryptography, so I’m not very excited about cryptography as an application.
I think the best way for this kind of thing would be if some important donor (who they may know personally already) invites them directly to talk to GiveWell, for example, rather than us trying to sell the idea. Personal connections are the best way I can think of. And then books, some events… all help a bit, but I’d start with who they know.
I think the most important GITV moments are often connected to people, and I think we as a community should put the effort into understanding new people. In my case, I think I had not one but two different moments, and different people connected to those:
-
The first one happened in early 2018 in Oxford. I was lucky enough to have gone there for a master’s degree, and midway through it I found out about a talk by Catherine Hollander, from GiveWell, on making the best out of your money. I had always been interested in how to end poverty, so it was quite appealing. There I met Darius Meissner; they invited me for dinner after the event, and I think he really took the time to understand me well. In particular, I vividly remember one conversation where he argued that eating animals might not be good, and me saying that I cared more about the environmental reasons. In retrospect this feels weird, almost as if the only reason I said it was that I had heard other people believe so. But it also highlights something important: changing people’s (moral) views takes time, even when those views make a lot of sense. The good thing was that such a simple conversation nudged me towards becoming a vegetarian, and I remember already buying much less meat back then.

2018 was also a year when EAs were putting a lot of focus on longtermism. Indeed, after this first event, Huw from EA Oxford invited me to talk a bit and gave me a couple of books to read. Unfortunately, the talk, focused mostly on longtermism, did not resonate with me. I was (1) confused by the fuzzy arguments about the “astronomical” importance of the long-term future, and (2) unsure whether it was even actionable; it’s not like going vegetarian or donating money. They were arguing for a career change, which felt like too much. Later on, I attended a couple of career planning events, which I liked, but I never really bought into longtermism that much. I think I somewhat took it into account when choosing my PhD, but it was far from the main consideration. The second career retreat, especially, was a bit off-putting: too much crazy conversation about paperclips and the like.
I remember being in a conversation where someone was arguing that having fewer children was better because it would free up more of your time to do greater amounts of good. I honestly think this is the kind of weird stuff that is not really super helpful. Maybe intellectually interesting, but not … the kind of thing we should be focused on?
-
After that, I went back to Madrid, Spain, not knowing that there was an incipient group there. I found out about it relatively quickly afterwards, because I enjoyed going to entrepreneurship events and they were hosting an intro event on the Google campus. I honestly did not know whether I would stick around. But I think the reason I did was that Pablo Melchor was there, very welcoming and willing to listen. They needed someone to help organize events, and I helped him more or less until covid happened, when big changes came and we became more international (now there is one large Spanish-speaking group 🙂). I also remember talking to Jaime Sevilla (a good friend of mine) and feeling a bit defensive, because he wanted to know what I was most interested in and encouraged me to take action and organize events straight away 😜
From all of this, I try to remember that doing good as an EA is socially demanding, as it requires doing things people don’t usually do. For that reason, I try to give people time and space to learn. I think reading is a great way to learn more and get more engaged, and as a community we have really good written material. On the other hand, my path to EA is very different from that of people who, like Jaime, got interested in EA from a rationalist perspective. In any case, it is good to remember that EA is very weird, and that at some point you were on the other side of the conversation, where you would have liked to feel included and listened to.
-
A small comment: if feedback is scarce because of a lack of time, this increases the usefulness of going to conferences where you can meet grantmakers and speaking to them.
I also think that it would be worth exploring ways to give feedback with as little time cost as possible.
Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, it has recently been argued in the community that longtermism is not necessary to justify the kind of work we currently do, as pandemics or AI Safety can also be justified in terms of preventing global catastrophes.
That being said, I’d very much prefer the EA community’s bottom line to be about doing “the most good” rather than subscribing to longtermism or any other cool idea we might come up with. Those are all subject to change and debate, whereas doing the most good really shouldn’t be.
Additionally, it might be worth highlighting, especially when talking with people unfamiliar with EA, that we deeply care about the suffering of all present people. Quoting Nate Soares: