Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
Tangentially related: I would love to see a book of career decision worked examples. Rather than 80k’s cases, which often read like biographies or testimonials, these would go deeper on the problem of choosing jobs and activities. They would present a person (real or hypothetical), along with a snapshot of their career plans and questions. Then, once the reader has formulated some thoughts, the book would outline what it would advise, what that might depend on, and what career outcomes occurred in similar cases.
A lot of fields are taught in a case-based fashion, including medicine, poker, ethics, and law. Often, a reader can make good decisions in problems they encounter by interpolating between cases, even when they would struggle to reason about those problems from first principles. Some of my favourite books are similar, such as An Anthropologist on Mars by Oliver Sacks. It’s not always the most efficient way to learn, but it’s pretty fun.
It happens in Australian universities. Probably anywhere there’s a large centralised campus. Wouldn’t work as well in Oxbridge, though, because the teaching areas, and even the libraries, are spread all across the city.
Important topic. Though I find it hard to gauge the project without certain basic info:
In what ways is this actually a non-partisan effort (when the funding is going through ActBlue)?
How are you managing risks, including (but not limited to) polarising EA politics and poisoning political relationships?
To what extent has the project been vetted by funders and experts working in adjacent areas?
It might be orthogonal to the point you’re making, but do we have much reason to think that the problem with old-CFAR was the content? Or that new-CFAR is effective?
Especially for referrals, since there may be very many.
Yeah, I haven’t analysed Holden’s intended meaning whatsoever, but something like what you describe would make much more sense.
It can’t be right to say that every descendant of a digital person is by definition also a person. A digital person could spawn (by programming, or by any other means) a bot that plays RPS randomly, in one line of code. Clearly not a person!
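For concreteness, the one-line bot might look like this (a Python sketch; the point is just that such a descendant is trivially not a person):

```python
import random

# A complete "agent": plays rock-paper-scissors uniformly at random.
play = lambda: random.choice(["rock", "paper", "scissors"])
```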
What about the hypothesis that simple animal brains haven’t been simulated because they’re hard to scan? We lack a functional map of the neurons: which ones promote or inhibit one another, and other such relations.
Agree that we shouldn’t expect large productivity/wellbeing changes. Perhaps a ~0.1 SD improvement in wellbeing, and a single-digit percentage improvement in productivity—small relative to effects on recruitment and retention.
I agree that it’s been good overall for EA to appear extremely charitable. It’s also had costs though: it sometimes encouraged self-neglect, portrayed EA as ‘holier than thou’, EA orgs as less productive, and EA roles as worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding base, it’s been beneficial to de-emphasise sacrifice, in order to place more emphasis on effectiveness. That better reflects what we’re currently doing, and who we want to recruit. So long as we take care to project an image that is coherent, and not hypocritical, I don’t see a problem with accelerating the pivot. My hunch is that even apart from salaries, it would be good, and I’d be surprised if it was bad enough to be decisive for salaries.
This kind of ambivalent view of salary-increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.
If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you’d expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but far fewer do. You’d expect EAs to be nearly indifferent toward pay in job choice, but they’re not. You’d expect that when you increase EAs’ salaries, they’d just donate a large portion on to great tax-deductible charities, so >75% of the salary increase would be refunded on to other effective orgs. But when you say that the spending would be only a tenth as effective (rather than ~four-tenths), clearly you don’t believe this.
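To make the arithmetic explicit (a rough sketch; the 50% relative-effectiveness discount on re-granted funds is my assumption, chosen only to illustrate how a figure like ~four-tenths could arise):

```python
def raise_effectiveness(refund_share, relative_effectiveness):
    """Fraction of a salary increase's value recovered as effective
    spending, counting only the portion the employee re-donates onward."""
    return refund_share * relative_effectiveness

# If >75% of a raise were re-donated to orgs roughly half as effective as
# the funder's counterfactual grant, the raise would retain ~0.375 of its
# value, i.e. roughly four-tenths -- not one-tenth.
estimate = raise_effectiveness(0.75, 0.5)
```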
Although some EAs are insensitive to money in this way, 90% seems too high. Rather, with doubled pay, I think you’d see some quality improvements from an increased applicant pool, some growth in the workforce (>10%), and improved retention. Some would buy themselves some productivity and happiness. And yes, some would donate. I don’t think you’d draw too many hard-to-detect “fake EAs”—we haven’t seen many so far. Rather, it seems more likely to help quality than hurt on the margin.
I don’t think the PR risk is so huge at <$250k/yr levels. The closest thing I can think of is commentary regarding folks at OpenAI, but it’s a bigger target, with higher pay. If the message gets out that EA employees are not bound to a vow of poverty, and are actually compensated for >10% of the good they’re doing, I’d argue that would enlarge and improve the recruitment pool on the margin.
(NB. As an EA worker, I’d stand to gain from increased salaries, as would many in this conversation. Although not for the next few years at least given the policies of my current (university) employer.)
I think they believe in Wei Dai’s UDT, or some variant of it, which is very close to Stuart’s anthropic decision theory, but you’d have to ask them which, if any, published or unpublished version they find most convincing.
One excerpt worth quoting (emphasis added):
Rob Wiblin: …if you were able to get even 10x leverage using science and policy by trying to help Americans, by like, improving U.S. economic policy, or doing scientific research that would help Americans, shouldn’t you then be able to blow Against Malaria Foundation out of the water by applying those same methods, like science and policy, in the developing world, to also help the world’s poorest people?
Alexander Berger: Let me give two reactions. One is I take that to be the conclusion of that post. I think the argument at the end of the post was like, “We’re hiring. We think we should be able to find better causes. Come help us.” And we did, in fact, hire a few people. And they have been doing a bunch of work on this over the last few years to try to find better causes...
The most relevant comments in the transcript seem to be in the section “GiveWell’s top charities are (increasingly) hard to beat”.
researching the legal environment in the state where the charity is registered and coming up with creative ways around local regulations or going through complicated registration procedures such as the ~1.5-year-long one with the SEC are not things that I can automate… I’m taking two different perspectives in the comment based on the following steps: (1) What is realistic to realize now to get the idea off the ground, and (2) what is realistic to expect to happen in 5–10 years assuming that step 1 has succeeded… I don’t see a way to get [tax deductibility] unfortunately… The legal risks I’m referring to are not simply that it might not be possible to get tax deductibility. It’s rather that in the worst case the responsible people at the charities may need to pay 8–9 digit settlements to the SEC or go to prison for up to five years for issuing unregistered securities.
VCs often manage to buy stakes in companies privately. Wouldn’t it be natural to sidestep that issue by copying what VCs do (and staying off the blockchain)? i.e. step (1) is privately traded patronage certificates, then step (2) is public ones? If so, then one could imagine a scenario where all you need for now is to do some research, and write up a pro forma contract?
Ah, my point here was more that an evil charity that is afraid that it’ll get shorted can decide not to offer the (say) 99% of its shares that it still holds for borrowing...
I can envisage a lot of ways to ensure some lending, so this seems like a small advantage.
I’m currently very concerned about prices not reflecting downside risks, and this mechanism is the only one that may be able to keep risky charities out...
Yes, having the ability to short companies is quite a weak method for punishing companies, because they can just stop selling patronage certs if they go negative. It would be better if we could get charities to pay for their negative impact somehow. An “absolving” certificate, of sorts. Maybe the people who would want to sell these “absolving” certificates are similar to the ones who look to buy “patronage”...
My thinking has gone through the following steps: (1) I want to create charity shares. (2) Oops, charity shares are prohibitively difficult to do because of legal risks, effort, and hence very low chance of getting the buy-in from all the US- and UK-based EA charities. (3) So I need to come up with something that is almost as good but is more achievable: Project shares… and intervention shares...
Ahhhh, OK! I must say though, it rewards and punishes orgs for the performance of other orgs in their area. You portray this as a positive, but it seems like a big negative to me. It incentivises people to start new incompetent orgs in an intervention area (or to keep incompetent orgs running) just because there are existing competent orgs. Conversely, it punishes competent orgs for the presence of harmful orgs implementing their same intervention. It’s quite messy to require an external panel to divide up the tokens between orgs. Frankly, given that it’s a bit inelegant, I would bet that other problems will arise.
I can’t promise I’ll have much more to say in this thread, but in case I don’t, let me say that I’ve found this an illuminating discussion. Thanks!
Agree that discussing terminology is not yet useful in and of itself. Though I’m intending it for the purpose of idea clarification.
Re charity vs intervention shares, my thinking was just that it would be more transparent for intervention shares to be constituted of charity shares, and for such shares to be issued by charities. Based on reading your comment, I’m not sure whether you agree?
As for your arguments: I find myself not so convinced by (1-2). I think the process of issuing charity shares could be automated for the charities. If desired, it seems not out of the question that these entities could even run as for-profits—given that you are proposing a revolution of the NGO sector, it seems weird to restrict yourself to the most common current legal setup (although I agree that tax deductibility is nice to have).
I can see that (3) pushes weakly toward impact certs, but not strongly because ideally you also want to have specific markets, and the benefits of liquidity and specificity trade off against one another (in terms of the information that readers can gain). And even if resale markets are fairly dormant, I don’t think it’s a disaster—it should still be at least as good as the status quo (donations), and in many ways better (valuation is done retrospectively).
Re (4), why can’t charity shares be bought/sold? Re (5), what is the built-in mechanism?
Re past/future shares, on further thought, even if you only allow patronage certs to be sold for past events on the “bottom layer”, there are ways to route around this: you can sell shares in the company itself, or you can sell the rights to any future patronage shares. I’m certain this is a good thing, because it allows people to invest in orgs that will have large future impact, similar to investing in an org that you think will win an x-prize. The real question is just whether you should allow this “natively”, i.e. whether you should be able to sell patronage of future activities. If you think of normal stocks, they do confer an ownership of the company into the indefinite future. Stocks can also have the problem where people make a company and make a bunch of promises about what it will do, sell it, then renege on those promises—they call it securities fraud, and have a lot of defenses built up against it. If you want to piggyback on that, maybe you would want to only allow sale of past activities “natively”, and then for sale of future impact to be done only by sale of regular stocks in the company itself. That’s my initial instinct, although there may be a lot of other considerations.
It could only be billionaires who are running out of donation targets. If Bezos can buy WaPo, then less prominent billionaires can buy less popular media with much less (though not zero) controversy. But I agree that it only works well if you have EA-leaning talent to work there, especially at the executive level.
Agree with some of this, but:
The title should clarify that it’s “national scale” rather than scale generally that’s overrated.
US and China are probably more likely to copy their own respective states & provinces than copy the Nordics, right?
Being unusually homogenous, stable, and trusting might mean that some policies work in the Nordics, even if they don’t work elsewhere.
If we’re worried about whether govt pursues certain tech (like AI) safely over the coming 1-2 decades, then we should favour involvement in the executive over the legislature, and experience in the former can’t really transfer from the Nordics to the US. Diffusion may be rather slow.
I tend to agree. I think the main argument against is that some people at and around MIRI argue they’ve already (dis)solved it. I’d be interested to know to what extent people like Wei Dai, Stuart Armstrong, and Paul Christiano agree. If you personally want to collaborate on anthropics research, then there are at least a couple of people at FHI who may be interested. Feel free to send a DM!
Would be better to review less biased literature e.g. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=review+of+challenge+prizes&btnG=
To begin with, let me just reiterate a terminological remark I made elsewhere, that might help a bit with conceptual clarity: the certificates don’t really denote an increment of impact. They denote being a “patron” of an impactful action. So they’re really patronage certificates (or some similar name). If the buyer is an EA, it is still an impact purchase, because buyers are simply valuing the certificates based on their impact.
Now let me say something more novel. To decide what the impact certificates should be for, it seems like discreteness is a key desideratum. An organisation is relatively discrete, so it’s easier to say whether a charity did/didn’t do something than to evaluate smaller objects (like a project) or larger objects (like intervention areas). Instinctively, I’d think that intervention shares are a non-starter, because it’s so unclear who is allowed to sell them. It would seem better to me for an intervention share to be built out of charity shares, similar to how an ETF is built out of stocks, or how a mortgage-backed security is built out of home loans. Out of the other two, I don’t have as strong an opinion.
If you do “charity shares”, then you’d probably want to sell shares corresponding to activities that are restricted to a particular year, or at least those that have already happened in the past. Otherwise, the charity could just sell shares corresponding to large projected future impacts, and then shut down. Once you sell the charity shares, the buyers would need to be able to split those shares up, and sell only that portion of the patronage corresponding to their preferred projects.
If you do “project shares”, then there’s a bit more overhead for the charity, in selling these separately, but then the buyers can just buy their favourite projects directly. Or if they want to buy patronage for all the charity’s activities, they can buy a full set, or bundle them together.
So I’m not sure which is better.
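A minimal sketch of the structure I have in mind (all names hypothetical; the key ideas are that charity shares are restricted to a year, that buyers can split them, and that an intervention share is just a basket of charity shares, the way an ETF is a basket of stocks):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CharityShare:
    charity: str      # issuer of the share
    year: int         # activities are restricted to a particular year
    fraction: float   # portion of that year's patronage

def split(share: CharityShare, parts: int) -> list[CharityShare]:
    """Buyers can split a share up and resell only part of the patronage."""
    return [CharityShare(share.charity, share.year, share.fraction / parts)
            for _ in range(parts)]

@dataclass
class InterventionShare:
    """Built out of charity shares, as an ETF is built out of stocks."""
    intervention: str
    basket: list[CharityShare]
```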