FHI shut down yesterday: https://www.futureofhumanityinstitute.org/
This reply is disappointingly short and again does not address the core question raised by Shakeel.
The letter of intent states that the grant was approved. Why doesn't FLI do more due diligence before approving a grant? Since you haven't stated the opposite, I assume that the letter is genuine (that would be nice to clarify, too). Is that a usual process? How often, by percentage, does it happen that you approve a grant and then later reject it?
Has anyone at FLI looked at the Swedish Wikipedia page of the org?
If you look at the Wikipedia page now, do you think, as a first guess, it would be OK to give $100k to such an organization?
Were you aware of Nya Dagbladet before?
Were you aware that Nya Dagbladet publishes horrific, racist content, or do you disagree with the characterization that they publish horrific, racist content?
What kind of media project was it that you initially wanted to fund?
It sounds like you think that the other 19 employees of Nonlinear had the same arrangement (travel with them and be paid $12k/year). I doubt this is true. Probably many of the 19 are employed remotely.
They got to pocket $12k/year into savings and live like a king.
Many people spend money on things besides rent+food+travel, so this sounds exaggerated.
It is inevitable that the EA brand will now be associated with irrational feminized college students rather than interesting quantitative thinkers willing to bite socially undesirable bullets. In many ways this association will be correct in practice, because we are outnumbered.
I dislike this part, especially the phrase “irrational feminized college students”.
A reason that is missing from the “contra” list: You could stay at a higher salary and donate the difference to a more cost-effective org than the one you work for.
I would expect that most people who work in EA do not work for the org that they consider to have the highest marginal impact for an additional dollar (although certainly some do).
Accepting a lower salary can be more tax-efficient than donating if the donation is not tax-deductible. But if you think that cost-effectiveness follows a power law, then it's quite possible that there is an org that is more than twice as cost-effective as your current employer.
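To illustrate with made-up numbers: if you forgo $10k of salary, your employer effectively gains roughly $10k of funding; if you instead keep the $10k, pay (say) 30% tax on it because the donation is not deductible, and give the remaining $7k to an org that is 2.5x as cost-effective, that donation is still worth the equivalent of $17.5k at your employer's cost-effectiveness.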
A relevant (imo) piece of information not in this post: The EA forum post that you are talking about was down-voted a lot. (I have down-voted too, although I don’t remember why I did so at the time.)
This makes me less worried than I otherwise would have been.
edit: I did not see Jason’s comment prior to posting mine, sorry for duplicate information.
I actually meant it as a compliment, thanks for pointing out that it can be received differently. I liked this “quick take” and believe it would have been a high-quality post.
I was not aware that my comment would reduce the number of quick takes and posts, but I feel that deleting my comment now just because of the downvotes would also be weird. So, if anyone reads this and felt discouraged by the above, I hope you post your things somewhere rather than not at all.
If someone could get Hinton to sit and talk 3+ hours with Paul Christiano or other experienced alignment researchers that could be really valuable.
I find it a bit irritating and slightly misleading that this post lists several authors (some of them very famous in EA) who have not actually written the submission. May I suggest listing only one account (e.g. ketanrama) as the author of the post?
Since this is tagged "Existential risk": What does this have to do with existential risk? Or is it not supposed to be about existential risk, not even indirectly? As far as I can tell, the article does not talk about existential risk. I could make my own guesses and associations connecting this topic with existential risk, but I would prefer it if this were spelled out.
You might be interested to know that they wrote an EA Forum post. The post seems to be quite long and has probably not been read fully by many people. They also mostly ignore immigration. I consider https://astralcodexten.substack.com/p/slightly-against-underpopulation to be generally a good rebuttal.
some related discussion: https://www.lesswrong.com/posts/TYTEJxzeK3jBMq2TZ/your-posts-should-be-on-arxiv and the comments on that post
I think whether a post would be a good fit for an academic journal depends a lot on the concrete article, and might not be worthwhile for some. Maybe you can encourage authors of specific posts directly and point them towards a fitting academic journal?
In general, doing small-scale experiments seems like a good idea. However, in this case, there are potentially large costs even to small-scale experiments, if the small-scale experiment already attempts to tackle the boundary-drawing question.
If we decide on rules and boundaries for who has voting rights (or participates in sortition) and who does not, it has the potential to create lots of drama and politics (e.g. discussions about whether we should exclude right-wing people, whether SBF should have voting rights if he is in prison, whether we should exclude AI capabilities people, which organizations count as EA orgs, etc.). Especially if there is "constant monitoring & evaluation". And it would lead to more centralization and bureaucracy.
And I think it's likely that such rules would be understood as EA membership, where you are either EA and have voting rights, or you are not EA and do not have voting rights. At least for "EAG acceptance", people generally understand that this does not constitute EA membership.
I think it would be probably bad if we had anything like an official EA membership.
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I’ll note that I stopped reading the linked article after “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do “narrow EA” or “global EA”.
Geoffrey Hinton
According to Wikipedia, “Regarding existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future”. So it seems plausible that he is not really interested in AI timelines and forecasting. If this is the case, then I think having Ajeya Cotra write the report is preferable to Geoffrey Hinton.
As a more general point, it is not clear whether good experts in AI are good experts in AGI forecasting. The field of AI is different here from climate change, in that climate change science deals inherently a lot more with forecasting.
Does having good intuitions about which neural network architectures make it easier to tell traffic lights apart from dogs in pictures help you assess whether the orthogonality thesis is true or relevant? Does inventing Boltzmann machines help you decide whether it is plausible that an AGI built in 2042 has mesa-optimization? Probably at least a little bit, but it's not clear how far this will go.
This is not really a disagreement but rather nitpicking, but I noticed that according to https://intelligence.org/topcontributors/ MIRI did receive a donation from Alameda Research. Not a large one, but some money from the SBF ecosystem arrived at MIRI, apparently. But this does not really contradict the speculations you make about SBF avoiding certain Bay Area people.
But absolutely, and yet a big part of EAs seem to be pro-Altman!
What makes you think a big part of EAs are pro-Altman? My impression is that this is not true, and I cannot come up with any concrete example.
some thoughts on different mechanisms:
Quadratic voting:
I think this could be fun. An advantage here is that voters have to think about the relative value of different charities, rather than just deciding which are better or worse. This could also be an important aspect when we want people to discuss how they plan to vote/how others should vote. If you want to be explicit about this, you could also consider designing the user interface so that users enter these relative differences between charities directly (e.g. "I vote charity A to be 3 times as good as charity B" rather than "I assign 90 vote credits to charity A and 10 vote credits to charity B"). Note, however, that due to the top-3 cutoff, putting in the true relative differences between charities might not be the optimal policy.
A technical remark: If you only want to do payouts for the top three candidates, instead of just relying on the final vote, I think it would be better to rescale the voting credits of each voter after kicking out the charity with the fewest votes, and then to repeat the process until only 3 charities are left. This would reduce tactical voting and would better respect voters who pick unusual charities as their top choices. This process has some similarities with ranked-choice voting. Additionally, users should have the ability to enter large relative differences (or very tiny votes like 1 in a billion), so their votes are still meaningful even after many eliminations.
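Not something you need to adopt, but here is a minimal Python sketch of this elimination-and-rescaling idea (the ballot format, the 100-credit budget, and the square-root tallying are my own illustrative assumptions, not a proposal for the actual implementation):

```python
import math

def tally(ballots, remaining, budget=100.0):
    """Quadratic votes for the remaining charities; each voter's credits are
    rescaled so that they always spend their full budget on remaining charities."""
    totals = {c: 0.0 for c in remaining}
    for ballot in ballots:
        spent = sum(credits for c, credits in ballot.items() if c in remaining)
        if spent <= 0:
            continue
        for c, credits in ballot.items():
            if c in remaining and credits > 0:
                # rescale to the full budget, then take the square root (quadratic voting)
                totals[c] += math.sqrt(budget * credits / spent)
    return totals

def quadratic_top_k(ballots, k=3):
    """Repeatedly eliminate the charity with the fewest rescaled quadratic votes
    until only k charities remain, then return payout shares."""
    remaining = {c for ballot in ballots for c in ballot}
    while len(remaining) > k:
        totals = tally(ballots, remaining)
        remaining.remove(min(remaining, key=totals.get))
    final = tally(ballots, remaining)
    total_votes = sum(final.values())
    return {c: votes / total_votes for c, votes in final.items()}

# Toy example: three voters, four charities, 100 credits each
ballots = [
    {"A": 90, "B": 10},
    {"B": 50, "C": 50},
    {"C": 20, "D": 80},
]
print(quadratic_top_k(ballots, k=3))
```

With this toy input, D gets eliminated first and voter 3's freed-up credits flow entirely to C.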
Approval voting:
I think voting either "approve" or "disapprove" does not match how EAs think about charities. I generally approve of a lot of charities within the EA space, but would not vote "approve" for these charities.
I worry that a lot of tactical voting can take place here, especially if people can see the current votes or the pre-votes. For example, a person who approves of both the 3rd-placed and the 4th-placed charity (by overall popularity) might want to switch their vote to "disapprove" for the (according to them) worse of the two: voters are incentivized to give different votes to the 3rd-placed and 4th-placed charity, because that is where the difference will have the biggest impact on the money paid out. Or a person who disapproves of all the top charities might switch a vote from "disapprove" to "approve" so that their vote matters at all.
Ranked-choice voting:
I am assuming here that the elimination process in ranked-choice stops once you reach the top 3 and that votes are then distributed proportionally. I think this would be a good implementation choice (mostly because proportional voting would be a decent choice by itself, so doing it for the top 3 seems reasonable). Ranking charities could be more satisfying for voters than having to figure out where to draw the line between "approve" and "disapprove", or putting in lots of numeric values.
Generally, ranked-choice voting seems like an ok choice.
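A rough sketch of the interpretation described above (instant-runoff elimination down to the top 3, then proportional payout from the remaining first preferences; the ballot format is just an illustrative assumption):

```python
def ranked_choice_top_k(ballots, k=3):
    """Instant-runoff elimination down to k charities, then distribute the pot
    proportionally to the final first-preference counts.
    Each ballot is a ranking, best charity first."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # each ballot counts for its highest-ranked charity still in the race
        counts = {c: 0 for c in remaining}
        for ballot in ballots:
            for c in ballot:
                if c in remaining:
                    counts[c] += 1
                    break
        if len(remaining) <= k:
            total = sum(counts.values())
            return {c: n / total for c, n in counts.items()}
        remaining.remove(min(remaining, key=counts.get))

# Toy example: four voters ranking four charities
print(ranked_choice_top_k([
    ["A", "B", "C", "D"],
    ["B", "C", "A", "D"],
    ["D", "C", "A", "B"],
    ["A", "C", "B", "D"],
]))
```

Ties here are broken arbitrarily; a real implementation would want an explicit tie-breaking rule.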
how well will these allocate funds?:
I am quite unsure here, and finding the best charity based on the expressed preferences of lots of people with lots of opinions will be difficult in any case. My best guess here is that ranked-choice voting > quadratic voting > approval voting. A disadvantage of quadratic voting here is that some fraction of the money can end up being paid out to sub-optimal charities (even if everyone agrees that charity C is worse than A and B, it will likely still be rational for voters to assign non-zero weight to charity C, which corresponds to a non-zero payout).
understandability:
I think approval voting is easier to understand than ranked-choice voting, which is easier to understand than quadratic voting. This holds both for the user interface and for understanding the whole system. Also, the mental effort for making a voting decision is lower under ranked-choice and approval voting. I think the precise effects of a voter's choices will be difficult to estimate in any system, so keeping the mechanism itself easy to understand seems valuable.
general remarks:
Different voting mechanisms can be useful for different purposes, and paying 3 charities different amounts of money is a different use case than selecting a single president, so not all considerations and analyses of different voting mechanisms will carry over to our particular case. The top-3 rule will incentivize tactical voting in all these systems (whereas in a purely proportional system there would be no tactical voting). Maybe this number should be increased a bit (especially if we use quadratic voting). If there are lots of charities to choose from, it will be quite an effort to evaluate all these charities. Potentially, you could give each voter a small number of charities to compare with each other, and then aggregate the result somehow (although that would be complicated and would change the character of the election). Or there can be two phases of voting, where the first phase narrows it down to 3-5 charities and then the second phase determines the proportions.
My personal preferences:
Obviously, we should have a meta-vote to select the three top voting methods among user-suggested voting methods and then hold three elections with the respective voting methods, each determining how a fraction of the fund (proportional to the vote that the voting method received in the meta-vote) gets distributed. And as for the voting method for this meta-vote, we should use… ok, this meta-voting stuff was not meant entirely seriously.
In my current personal judgement, I prefer quadratic voting over ranked-choice and ranked-choice over approval voting. I might be biased here towards more complex systems. I think an important factor is also that I might like more data about my preferences as a voter: With quadratic voting, I can express my relative preferences between charities quantitatively. With ranked-choice voting, I can rank charities, but cannot say by how much I prefer one charity over another. With approval voting, I can put charities in only two categories.
I think there are problems with this approach.
(Epistemic status: I have only read parts of the article and skimmed other parts.)
The fundamental thing I am confused about is that the article seems to frequently use probabilities of probabilities (without collapsing these probabilities). In my worldview, probabilities of probabilities are not a meaningful concept, because they immediately collapse. Let me explain what I mean by that:
If you assign 40% probability to the statement "there is a 70% probability that Biden will be reelected" and 60% probability to the statement "there is a 45% probability that Biden will be reelected", then you have a 55% probability that Biden will be reelected (because 0.4 * 0.7 + 0.6 * 0.45 = 0.55). Probabilities of probabilities can be intermediate steps, but they collapse into single probabilities.
There is one case where this issue directly influences the headline result of 1.6%. You report intermediate results such as "There is a 13.04% chance we live in a world with low risk from 3% to 7%" (irrelevant side remark: in the context of xrisk, I would consider 5% as very high, not low), or "There is 7.6% chance that the we live in a world with >35% probability of extinction". The latter alone should set a lower bound of 2.66% (0.076 * 0.35 = 0.0266) for the probability of extinction! Taking the geometric mean in this instance seems wrong to me, and the mathematically correct thing would be to take the arithmetic mean when aggregating the probabilities.
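To illustrate the difference numerically, here is a small sketch using the 7.6%/35% figures from above plus an assumed 1% risk in the remaining worlds (that 1% is my own illustrative number, not from the article):

```python
# With probability 7.6% we live in a world with 35% extinction risk; assume
# (purely for illustration) a 1% risk in the remaining 92.4% of worlds.
p_world = [0.076, 0.924]
p_extinction = [0.35, 0.01]

# Arithmetic mean = the actual probability of extinction under this model
arithmetic = sum(w * p for w, p in zip(p_world, p_extinction))
print(arithmetic)  # ~0.036, and it can never drop below 0.076 * 0.35 = 0.0266

# Probability-weighted geometric mean, for contrast
geometric = 1.0
for w, p in zip(p_world, p_extinction):
    geometric *= p ** w
print(geometric)  # ~0.013, i.e. the geometric mean understates the risk
```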
I have not read the SDO paper in detail, but I have doubts that the SDO method applies to the present scenario/model of xrisk. You quote Scott Alexander:
Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilization. If it came up tails, He made none besides Earth. Using our one parameter [equation], we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?
No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.
I note that this quote fits perfectly fine for analysing the supposed Fermi Paradox, but it fits badly whenever you have uncertainty over probabilities. If God flips a coin to decide whether we have a 3% or a 33% probability of extinction, the resulting probability is 18%, and taking the mean is perfectly fine.
I would like to ask the author:
1. What are your probabilities for the questions from the survey?
2. What is the product of these probabilities?
3. Do you agree that multiplying these conditional probabilities is correct under the model, or at least gives a lower bound on the probability of AGI existential catastrophe? That is, do you agree with the inequality P(AGI existential catastrophe) ≥ p_1 · p_2 · … · p_n, where the p_i are your answers from 1.?
4. Is the result from 2. approximately equal to 1.6%, or below 3%?
I think if the author accepts 2. + 3. + 4. (which I think they will), they have to give probabilities that are significantly lower than those of many survey respondents.
I do concede that there is an empirical question of whether it is better to aggregate survey results about probabilities using the arithmetic mean or the geometric mean, where the geometric mean would lead to lower results (closer to parts of this analysis) in certain models.
TLDR: I believe the author takes geometric means of probabilities when they should take the arithmetic mean.
-
Consider donating all or most of your Mana on Manifold to charity before May 1.
Manifold is making multiple changes to the way the platform works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.
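To make the devaluation concrete (my own arithmetic from the rates above): a balance of 50,000 Mana donated before May 1 converts to $500 for charity, while after May 1 the same balance would convert to only $50.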
Also this part might be relevant for people with large positions they want to sell now: