Hi there! I’m an EA from Madrid. I am currently finishing my Ph.D. in quantum algorithms and would like to focus my career on AI Safety. Send me a message if you think I can help :)
PabloAMC
Would current quantum computing techniques, assuming the hardware to run them on is available, be able to more quickly/precisely derive the percentages of those agents at, say, State1 that would take Action1, Action2, or Action3?
I think so! But I also think that you can do it easily with a bunch of GPUs. Let me explain: the idea is to parallelize the agents and then just sample from them. You could do that using “quantum parallelism”, but I feel it would be simpler to just use GPUs for that.
I believe that you might be able to get some (polynomial, probably quadratic) speedup in the precision of the estimate using quantum resources, although I am not sure how useful that is.
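To make the classical version concrete, here is a minimal sketch (my own illustration, not something from the thread) of estimating those action probabilities by simulating many agents and counting outcomes; the policy, state and action names are made up. The standard error of such a Monte Carlo estimate shrinks like 1/sqrt(n), whereas quantum amplitude estimation would, in principle, shrink like 1/n for a comparable number of calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical policy: P(Action1..3 | State1). Purely illustrative numbers.
TRUE_POLICY = {"State1": [0.5, 0.3, 0.2]}

def sample_actions(state: str, n_agents: int) -> np.ndarray:
    """Simulate n_agents independent agents in `state` and record their actions."""
    probs = TRUE_POLICY[state]
    return rng.choice(len(probs), size=n_agents, p=probs)

n = 100_000
actions = sample_actions("State1", n)
estimates = np.bincount(actions, minlength=3) / n

# Classical Monte Carlo error scales like 1/sqrt(n); quantum amplitude
# estimation would (in principle) scale like 1/n for the same number of calls.
std_error = np.sqrt(estimates * (1 - estimates) / n)
for a, (p_hat, se) in enumerate(zip(estimates, std_error), start=1):
    print(f"Action{a}: {p_hat:.3f} +/- {se:.3f}")
```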
It is certainly amazing that the world nominal and PPP GDP are smaller than the global debt in the non-financial sector, which is in turn smaller than global household wealth. I am a bit confused: how can GDP be smaller than global household wealth?
Why isn’t GDP also accumulated? I mean, GDP is the wealth produced in any given year (perhaps discounting debt somehow?)
Actually this post may be of interest to read on the topic: https://forum.effectivealtruism.org/posts/bsE5t6qhGC65fEpzN/growth-and-the-case-against-randomista-development
I regardless believe that outside (and arguably within) quantum cryptanalysis the applications will be fairly limited.
I might be confused, but did we agree that the most useful application of quantum computing would be on chemistry and material science? I thought so, but the above sentence seems to say otherwise...
Estimation of probabilities to get tenure track in academia: baseline and publications during the PhD.
You’re right Ryan, I’ll modify the second complicated sentence. I am actually not sure what the difference is between tenure and tenure track, to tell the truth.
However, in one of the documents above I saw that institution is not such a strong predictor (point 4), whereas the h-index seemed useful (it is discussed in point 2).
I strongly agree with Ryan that success is, to a relatively large degree, predictable, as can be seen in the PCA decomposition of point 2 above, figure 1C.
I think it would be very valuable to have such a model, but the current code is only for biology (the impact-factor feature will fail, for instance, for anything different). If one wanted to fit a model to predict this, one could probably use Google Scholar and arXiv for the features, but the trickiest part would be to recover the positions those people ended up in (the target), which may partially be done using Google Scholar.
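As a toy illustration of the kind of model I have in mind (everything here is hypothetical: the features, the synthetic data, and the labels, which in reality would have to be scraped from Google Scholar/arXiv and matched to actual career outcomes), a simple logistic regression would already be a reasonable starting point:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-researcher features: publications during the PhD, h-index, citations.
X = rng.poisson(lam=[6, 8, 150], size=(200, 3)).astype(float)

# Hypothetical labels: 1 = ended up in a tenure-track position (synthetic, for illustration).
y = (X @ np.array([0.15, 0.25, 0.005]) + rng.normal(0, 1, 200) > 3.0).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)
print("P(tenure track) for a new profile:", model.predict_proba([[5, 10, 200]])[0, 1])
```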
There is something I would really like to know, although it is only tangentially related to the above: how is taking a postdoctoral position at FHI seen compared with other “standard academia” paths? How could it affect future research career options? I am personally more interested in the technical side, but feel free to comment on whatever you find interesting.
And since you mention it:
What EU AI policy should we push for? Through what mechanisms do you see EU AI policy having a positive impact, compared with the US for example? And what ways do you see for technical people to influence AI governance?
Many thanks in advance!
I’d like to know if we could register a bit later than the 12th of October. In Madrid we are starting a university group, but it is not set up yet.
I would say you basically cannot get tenure if you don’t get a PhD, so dropouts are not taken into account in any of the previous statistics, as far as I understood them. All these metrics are of the kind: x% of PhD alumni got tenure, or similar.
I actually agree that taking into account the private sector could help, but I am much less certain about the freedom they give you to research those topics, beyond the usual suspects. That was why I was focussing on academia.
I just posted another article I found on average publication rates in Norway for different positions, ages, fields and gender.
Hey Asya! I’ve seen that you’ve received a comment prize on this. Congratulations! I have found it interesting. I was wondering: you give these two reasons for rejecting a funding application:
Project would be good if executed exceptionally well, but applicant doesn’t have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability.
Applicant wants to do research on some topic, but their previous research on similar topics doesn’t seem very good.
My question is: what method would you use to evaluate the track record of someone who has not done a Ph.D. in AI Safety, but rather in something like Physics (my case :) )? Do you expect the applicant to have some track record in AI Safety research? I do not plan on applying for funding in the short term, but I think I would find some intuition on this valuable. I also ask because I find it hard to calibrate myself on the quality of my own research.
[Question] 1h-volunteers needed for a small AI Safety-related research project
[Question] How to get more academics enthusiastic about doing AI Safety research?
I agree that the creation of incentives is a good framing for the problem. I wanted to note a few things though:
Academics often have much more freedom to research what they want, and most incentives come down to the number of publications or citations. Since you can publish AIS papers in standard top conferences, I do not see a big problem, although I might be wrong, of course.
Changing the incentives is either more difficult (changing protocols at universities or government bodies?) or just a matter of giving money, which the community seems to be doing already. That’s what makes me think that academic interest is more of a bottleneck, but I am not especially well informed.
I think the best way for this kind of thing would be for some important donor (whom they may already know personally) to invite them directly to talk to GiveWell, for example, rather than us trying to sell the idea. Personal connections are the best way I can think of. And then books, some events… all help a bit, but I’d start with who they know.
The main issue I find with this is that those “standout” charities should be treated as possible future top charities, and not publishing an equivalent list might make it harder for them to become so.
I’m a bit enthusiastic about this idea. However, as already mentioned by others:
I would try it on a small scale instead of debating much.
I’d rather have it look more like an unemployment fund than a charity, specifically because there are probably some issues with tax deductions if you can actually get the money back.
I think the main appeal of this idea is that most great ways to improve the world involve a career change, which involves some level of personal risk. For example, I currently face the choice between earning probably quite a lot in quantum computing after having done a Ph.D. in the topic, or attempting a career change. I know that studying things such as AI to get into AI Safety does not look too risky, but if we can find ways to mitigate the personal risk of career changes, that would be great in my opinion.
In other words:
In personal matters, you want to be conservative, because 50 million does not make you 50 times happier than 1 million.
In altruistic matters, you should aim for risky, large-upside options.
This fund could help bridge those different aims.
From the perspective of a PhD student in quantum computing, I would say that one should not worry excessively about quantum computing breaking cryptography. This is mainly for two reasons:
1. As pointed out in other comments by RavenclawPrefect and beth, so-called “post-quantum” cryptographic algorithms are being developed that should not be vulnerable to quantum attacks (NIST holds a contest to develop the future standard). I am not especially knowledgeable about this particular topic, but it seems that some approaches based on hash functions or lattices could be feasible (a toy hash-based example is sketched after this list). These are just the usual kind of public-key mathematical cryptography, but built on harder problems.
2. Even in the highly unlikely situation where the above point fails, quantum technology itself gives you a solution: quantum cryptography is theoretically invulnerable to almost any kind of attack. I say theoretically, because quantum devices are not perfect and an adversary may be able to exploit this to their advantage. The most famous quantum key distribution protocols are BB84 (the first one to be discovered) and Artur Ekert’s protocol based on Bell inequalities. To the best of my knowledge, the research edge is now on “device-independent quantum cryptography”, in which you are supposed to be using a device from a supplier you may not trust. This path to secure cryptography is a more physical one: just find a way to perform private key distribution safely.
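To make point 1 a bit more concrete, here is a toy Lamport one-time signature, one of the simplest hash-based constructions (my own illustrative sketch, not the NIST standard; real post-quantum schemes such as SPHINCS+ are far more involved):

```python
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # For each of the 256 message-hash bits, two random secrets;
    # the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(digest: bytes):
    return [(byte >> i) & 1 for byte in digest for i in range(8)]

def sign(message: bytes, sk):
    # Reveal, for each bit of the message hash, the matching secret.
    return [pair[b] for pair, b in zip(sk, bits(H(message)))]

def verify(message: bytes, signature, pk) -> bool:
    return all(H(s) == pair[b] for s, pair, b in zip(signature, pk, bits(H(message))))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))   # True
print(verify(b"hacked", sig, pk))  # False (with overwhelming probability)
```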
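And for point 2, a noiseless toy simulation of the BB84 idea (again my own sketch, ignoring eavesdropping and device imperfections): Alice encodes random bits in random bases, Bob measures in random bases, and they keep only the positions where the bases happened to match.

```python
import random

random.seed(0)
N = 32  # number of qubits sent

alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("ZX") for _ in range(N)]  # Z = computational, X = Hadamard basis
bob_bases   = [random.choice("ZX") for _ in range(N)]

bob_results = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_results.append(bit)                    # same basis: Bob recovers the bit exactly
    else:
        bob_results.append(random.randint(0, 1))   # wrong basis: outcome is random

# Publicly compare bases (not bits) and keep the matching positions as the shared key.
key_alice = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
key_bob   = [r for r, a, bb in zip(bob_results, alice_bases, bob_bases) if a == bb]
assert key_alice == key_bob
print("shared key:", "".join(map(str, key_alice)))
```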
In conclusion, I do not expect cryptography to become unfeasible because of QC; rather, it seems more likely that cryptography will become even harder to break.
That said, I am actually trying to figure out during my PhD whether there could be any interesting areas of research where QC may be useful in the field of AI Safety. An argument for it is that there exists a research topic called Quantum ML, which is still in its infancy. On the other hand, AI Safety may not require any especially compute-expensive algorithms, but rather the right approaches (perhaps also with the higher level of abstraction you would have in QC). I say this because I would be very interested in hearing from anyone who would like to work on similar topics (because they have this particular background) and/or has particular insights or ideas that could help.
Thanks!