I’m a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.
As far as I understand, text to speech browser extensions do exist, but I haven’t tested their quality relative to this one.
Read Aloud: A Text to Speech Voice Reader
Read aloud the current web-page article with one click, using text to speech (TTS). Supports 40+ languages.
Read Aloud uses text-to-speech (TTS) technology to convert webpage text to audio. It works on a variety of websites, including news sites, blogs, fan fiction, publications, textbooks, school and class websites, and online university course materials.
Some of the interventions don’t have to do with changing societal values/culture/habits though; e.g. those falling under hardware/medical.
But maybe you think they’ll take time, and that we don’t have enough time to work on them either.
I think the most interesting point (or at least the most fun to write about on an internet forum) hasn’t been made, and it’s not that EAs can perform better running a general fund, but that there is a strong niche for an EA fund that helps EAs build EA companies.
I touched on this niche issue in two comments: 1, 2.
The Swedish utilitarian philosopher Torbjörn Tännsjö has argued for that view (in Swedish; paywalled): “We should embrace a future where the universe is populated by blissful robots.” I’m sure others have as well.
Alright, but if there were such EA VCs they might want to keep an extra eye on EA start-ups, because of special insider knowledge, mutual trust, etc. Plus EAs may be underestimated, as per above.
I do agree, however, that unpromising EA start-ups shouldn’t be funded just because their founders are EAs.
You don’t have to believe that VCs are generally irrational in order to believe that an EA VC could be a good idea. I think arguing against the claim that VCs are generally irrational is akin to a weak man argument.
People presumably start successful venture capital firms now and then, e.g. based on niche competencies or niche insights. It’s not the case that new venture capital firms never succeed. And to determine whether an EA venture capital firm could succeed, you’d have to look into the nitty-gritty details, rather than raising general considerations.
Also, in a sense VC is just another industry, alongside, e.g., crypto, remittances, etc. The premise of the OP is that EA companies could become very successful. And if so, those could a priori include VCs. You’d need some special argument as to why EA VCs are less likely to succeed than EA companies in some other industries.
One possibility is that EAs are better than it might seem at first glance. The fact that there is some track-record of EA start-up success (as per the OP) may be some evidence of that.
If that is the case, then VCs may underestimate EA start-ups even if VCs are generally decent—and EA companies may also be a good investment (cf. your second paragraph).
I guess a relevant factor here is to what extent successful EA start-ups have been funded by EA vs non-EA sources.
I liked this comment.
Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction—an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)
I guess the fact that EA is a quite philosophical movement may be a reason why there’s been a substantial (but by no means exclusive) focus on the philosophical argument. It’s also easier to convey quickly, whereas the empirical argument requires much more time.
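The philosophical argument above is essentially back-of-the-envelope expected-value arithmetic, which can be sketched as follows (all numbers here are illustrative assumptions, not estimates from the literature):

```python
# Hypothetical illustration: even a small existential risk carries a large
# expected loss once the value of future generations is counted in.

risk = 0.001            # assumed 0.1% chance of an existential catastrophe
future_people = 10**15  # assumed number of future people who would otherwise exist

# Expected loss = probability of catastrophe x stakes if it occurs.
expected_loss = risk * future_people

print(f"Expected loss: {expected_loss:.0e} lives")
```

On these (purely illustrative) assumptions, even a 0.1% risk corresponds to an expected loss of a trillion lives, which is the sense in which "even small risks are hugely harmful in expectation."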
To actually achieve the goals of longtermism it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers’ language for instrumental ends, not necessarily in strictly ‘correct’ ways.
This sentence wasn’t quite clear to me.
Yes, this is my impression as well, based on recently having booked a day 2 test.
Also, one may want to check the reliability of the providers via rating sites (one cheap one I looked at had a terrible rating).
However, one should also note that the rules are about to change:
From 24 October fully vaccinated passengers and most under 18s arriving in England from countries not on the red list can take a cheaper lateral flow test, on or before day 2 of their arrival into the UK. These can be booked from 22 October.
Yeah. Could be good to study EA success in different areas more systematically, as we get more empirical data.
It’s also been used outside of genetics by others. I find the EA usage unproblematic.
Yes, I think it’s stronger evidence of EAs being good at making a lot of money (or of it being easier than expected to make a lot of money) than of EAs being super talented in general (though it’s some evidence of that as well).
I don’t know much about Wave, but to me it seems like an additional data point, even if a smaller one (meaning there isn’t just one case).
Afaict there is a difference between the Long Reflection and Hanson’s discussion about brain emulations, in that Hanson focuses more on prediction, whereas the debate on the Long Reflection is more normative (ought it to happen?).
Neoliberals tend to talk about issues that many people take an interest in to a greater extent than EAs do. I would guess that that’s an important part of the explanation of the Neoliberals’ greater success on Twitter.
There seems to be some premise missing in this argument.
To me, it seems that the question of whether professional distance is good is mostly orthogonal to the question of why EA isn’t a solved problem already.
I agree that it would be good to describe this distinction in the Wiki. Possibly it could be part of the Epistemic deference entry, though I don’t have a strong view on that.
Yes, agreed. You can hide tags, like the creative writing contest, from the frontpage, but if you scroll down, those posts and their comments are visible (at least they are to me; maybe there is some way to hide them). It would be good if they could be entirely hidden.
And yes, it would be good to be able to hide individual posts (along with their comments) as well.
I’m not an expert, but to me it seems like cryptocurrencies have received a lot of attention within the EA community. E.g. see Sam Bankman-Fried.