I am in favour of people posting requests, including requests for money. Even if these posts are not of interest to most readers, I think they can be of great value when read by the right person, but the chances of that go down dramatically if the posts are not on the front page. On the other hand, we don’t want the front page to be filled up with various requests. It takes up space, and also doesn’t look very good. But I do think there is a simple win-win here. Create a top-level post called something like “Requests for funding and other favours”, where people can leave their requests as comments. This will only take up a single line on the front page, and it will be more accessible for the people who are looking to donate.
Then maybe all these people should gang up and start a new hub, literally anywhere else. Funding problem mostly solved.
If people are not seriously trying this, then it’s hard for me to take seriously any claims of a lack of funding. But as I said, I might be missing something. If so, please tell me.
You are correct that people in the Bay can find out about projects in other places. The projects I know about are also not in the same location as me. I don’t expect that being in the Bay gives an advantage for finding out about projects in other places, but I could be wrong.
When it comes to projects in the Bay, I would not expect people who lack funding to be there in the first place, given that it is ridiculously expensive. But I might be missing something? I have not investigated the details, since I’m not allowed to just move there myself, even if I could afford it. (Visa reasons; I’m Swedish.)
Looking for more projects like these
AI Safety Support is looking for both funding and fiscal sponsorship. We have two donation pledges which are conditional on the donations being tax-deductible (one from Canada and one from the US). But even if we solve that, we still have room for more funding.
The money will primarily be used for salaries for me and JJ Hepburn.
AI Safety Support’s mission is to help aspiring and early-career AI Safety researchers in any way we can. There are currently lots of people who want to help with this problem but who don’t have the social and institutional support from organisations and people around them.
We are currently running monthly online AI Safety discussion days, where people can share and discuss their research ideas, independent of their location. These events are intended as a complement to the Alignment Forum and other written forms of publication. We believe that live conversations are a better way to share early-stage ideas, and that blog posts and papers come later in the process.
We also have other projects in the pipeline, e.g. our AI Safety career bottleneck survey. However, these things are currently on hold until we’ve secured enough funding to know we will be able to keep going for at least one year (to start with).
AI Safety Support has only existed since May, but both of us have a track record of organising similar events in the past, e.g. the AI Safety Camps.
I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren’t yet funded by larger donors.
I am sceptical about this. There are *lots* of non-Bay-Area projects, and my impression (low confidence) is that it is harder for us to get funding. This is because even the official funding runs mostly on contacts, so they also mostly fund stuff in the hubs.
I know of two EA projects (not including my own) which I think should be funded, and I live in Sweden.
1) You will hear about 10-50 EA projects looking for funding, over the next 2 months (80%).
2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship). (80%)
Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn’t worth the time investment.
I did some googling.
In the UK there are 4 ways to get a PhD (according to this website), and only one of them is the traditional PhD program.
Here is a discussion on independent PhDs. People are disagreeing on whether it is possible to do a PhD without a supervisor, pointing towards different practices in different countries.
Several people claim that “The PhD process is about learning, not just publishing”, but my impression is that this is a very modern idea. A PhD used to be about proving your capability, not monitoring your learning process.
I have noticed based on my search that nearly 60% of research roles in think-tanks in Europe have PhDs.
So almost half of them don’t. If you want a job at one of those think tanks, I would strongly recommend that you just go straight for that.
If you want to do research, then do the research you want to do. If the research you want to do mainly happens at a company or think tank, but not really in academia, go for the company or think tank.
There are other ways of getting a PhD degree that do not involve enrolling in a PhD program. In many countries, the only thing that actually matters for getting the degree is to write and defend a PhD thesis containing original research done by you. For example, if you just keep publishing in academic journals until your body of work is about the same as can be expected from a PhD (or maybe some more, to be on the safe side), you can put it all in a book, approach a university, and ask to defend your work.
This may differ between countries. But universities mostly accept foreign students, so if you can’t defend your independent thesis at home, go somewhere else.
Some of the questions on the checklist I would endorse more as guidelines or warning signs than as strict rules.
Is there a substantial amount of literature in your field?
Was there a major discovery in the field in recent years?
Both those questions measure how much you can learn from others in academia. If you can’t take advantage of colleagues, then going into academia at all (even if you don’t intend to stay) will be lower value. So you might be more productive elsewhere.
The first one also says something about how easy/hard it will be to publish and generally get recognised. If you do something non-established, you will have a much harder time.
But there are two main reasons you might want to step into academia anyway.
1) To influence other academics. (I think this is the main reason FLI chooses to be an academic institution.)
2) To get paid. (In cases where there are no other options.)
Do you want a career in academia?
Is there a better option for prospective PhD students who want a career in research outside of academia?
Lots of places outside academia do research: companies, non-profits, think tanks, independent AI Safety researchers with Long Term Future Fund grants.
Which is the better option depends on what research you want to do. The more abstract the research, the more likely academia is a good choice; the more concrete, the more likely it is not. E.g. charity evaluation is a type of research that I don’t think would do well in academia (though this is not my field at all, so I might be wrong).
Sort of, and it might take some time. The short of it is that I’m less enthusiastic about impact purchase.
I want some sort of funding system that is flexible, and I think the best way to do this is to sponsor people, not projects. If someone has, through their past work, shown competence and good judgement, I think they should be given a salary and the freedom to do what they think is best.
I thought the way to achieve this was impact purchases, but as someone pointed out in a comment, this makes for a very economically uncertain situation for the people living this way, which causes stress and short-sightedness, which is not the best.
When I wrote this post, I assumed that I needed to have a plan to get a grant in the current system. But after talking to one of the fund managers of the Long Term Future Fund, I found out that it is possible to get a grant by simply producing a track record and some vague plan to do more of the same. I’ve decided to try this out for myself. I’m waiting for an answer from the Long Term Future Fund, and plan to write an update after I know how that goes.
If I get the grant, this would prove that it is at least possible to get funding without a clear plan. If I get rejected, the conclusions I draw from that depend on what feedback I get with my rejection. Either way, I have decided to wait and see how the grant application goes before writing the follow-up.
Putting this suggestion out there, because there are always people looking for AI Safety career advice, and this is a tried and tested format.
First round, everyone shares their career plans (or lack of plans).
Second round, everyone who wants to shares career advice that they think might be helpful for others in the circle.
It would have to be a late session if you want me to lead it.
I want to listen to this podcast!
Watching it yet again, I think it would feel more right if the guy were not so easily convinced, but instead it ended with him being like “hm, that sounds promising, I’m going to learn some more”.
Both the puppets really felt like real people with actual personalities to me, up until t=1:57. But then the guy just completely changes his mind, which broke my suspension of disbelief. I think that’s the point when it mostly started to sound like “yet another commercial”.
The format of the video is basically: “Do you worry about these things? Then we have the solution”, integrated with some back and forth, which I really liked.
“Do you worry about these things? Then we have the solution” is a standard pattern in commercials, for a good reason. I think this is a good pattern also for selling ideas like EA. But it also means that you can’t just say that you understand my concerns and that you have solutions; you have to give me some evidence, or else it is just another empty commercial.
The person singing about their doubts felt relatable, in that they brought up real concerns about charity that I could imagine having before EA. I don’t remember exactly, but these seemed like standard and very reasonable concerns. And I got the impression that you (the video maker) really understand “my” (the viewer’s) worries about giving to charity.
But when you were singing about the solutions, you fell a bit short. I don’t think this video would win the trust of an alternative Linda, i.e. convince her that your suggestions for charity are actually better. I think it would help to put in some arguments for why treatable diseases are a good focus, and how to lift the barriers you mention.
Every charity says they are special, so just saying that doesn’t count for much. But if you give me some arguments that I can understand for why your way is better, then that is evidence that you’re onto something, and I might go and check it out some more.
All that said, I re-watched the video, and I like it even more now. The energy and the mood shifts are amazing.
On re-watching, I also feel that a viewer should be able to easily figure out the connection between focusing on diseases and avoiding building dependency. But I remember that the first time I watched it, it felt like there was a major missing link there. I think it is because now, when I know what they will say, I have some more time to reflect and make those connections myself.
But people seeing this on the internet might only watch once, so...
I very much enjoyed the video. But I don’t think it would have been able to change my mind in some alternative reality where I didn’t already know about EA.
Some more additions:
I) I found out what happened to impactpurchase.org
Paul Christiano (from private email, with permission to quote):
Basically just a lack of time, and a desire to focus on my core projects. I’d be supportive of other people making impact purchases or similar efforts work, I hope our foray into the space doesn’t discourage anyone.
II) Justin Shovelain told me (and gave me permission to share this information) that he would probably have focused more on Coronavirus stuff early on, if he had thought there was a way to get paid for this work.
This is another type of situation where grants are too slow.
I have changed my mind quite a bit since writing this blogpost. The updates are coming from the discussions with you in the comments, so thanks to everyone who discussed this with me.
Everything in this comment is still work in progress. I’ll write something more formal and well thought through later, when I have a more stable opinion. But my views have already changed enough that I wanted to add this update.
What I actually want is some sort of trust-based funding. If I have proven myself enough (e.g. by doing good work), then I get money, no questions asked. The reason I want this is because of flexibility (see main post).
Giving away money = Giving away power
Impact purchases have the neat structure that if I have done X amount of good, I get X amount worth of trust (i.e. money). This seems to be exactly the right amount, because it is the most you can give away and still be protected from exploitation. If someone who is not aligned with the goals of the funder tries to use impact purchases as a money pump, they still have to do an amount of good equal to the payout they want.
A project to project lifestyle doesn’t seem conducive to focusing on impact.
We actually know this from another field. In most of academia, the law of the land is publish or perish. Someone living off impact purchases will face a similar situation, and it is not good, at least not in the long run.
I think the high impact projects are often very risky, and will most likely have low impact.
To the extent that this is true, impact purchase will not work.
In theory we could have impact investors, who fund a risky project and earn money by selling the impact of the few projects whose impact reaches the stars (literally and/or figuratively). But this requires another layer which may or may not happen in reality (and probably won’t). Also, from the perspective of the applicant, how is this any different from applying for a grant? So what have we gained?
If not impact purchase, then what?
I still would like to solve the problem of inflexibility that grants have. And actually, I think the solutions already exist (to some extent).
1) Get a paid job, with high autonomy.
2) Start an organisation and fundraise. I did not think of this until now, but when orgs fundraise, they typically don’t present a plan for what they will do with the money. They mainly point towards what they have done so far, and ask for continued trust.
3) …? I’d be very interested in other suggestions. I would not be surprised if there are other obvious things I have missed.
There are also other solutions that don’t exist yet (or not very much) in EA, but could be implemented by any institution or person with spare money:
a) A “trusted person” job: a generic employment you offer to anyone whom you would like to keep up the good work, or something like that.
b) Support people on Ko-fi or Patreon, or similar, and generally encourage this behaviour from others too. (I know this is happening already, but not enough for people to make a living.)
I’m ok with hits-based impact. I just disagree about events.
I think you are correct about this for some work, but not for other work. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served.
Events that are focused on sharing information and networking fall in this category. People in a small field will get to know each other and each others work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement.
But some other events feel much more hits-based, now that I think of it: anything focused on getting people started (e.g. helping them choose the right career), or events focused on ideation.
But there are other types of events that are more hits-based, and I notice that I’m less interested in doing them. This is interesting. Because these events also differ in other ways, there are alternative explanations, but it seems worth looking at.
Thanks for providing the links, I should read them.
(Of course, everything relating to X-risk is all-or-nothing in terms of impact, but we can’t measure and reward that until it no longer matters anyway. Therefore, in terms of AI Safety, I would measure success in terms of research output, which can be shifted incrementally.)