Dividing three or four cause areas into two or three categories is too ad hoc to be a reliable explanation.
OK, it sounds like the biggest issue is not the recognition algorithm itself (which can be replicated or bought quickly) but the acquisition of databases of people’s identities (which takes time and perhaps consent earlier on). The two can certainly come together, but otherwise consider the possibilities that (a) a city only uses face recognition for narrow cases, like comparing video footage to a known suspect, while being unable to do face recognition on the general population, and (b) a city has profiles and the ability to identify all its citizens for some other purpose but just doesn’t have the recognition algorithms (yet).
Well, I’m not trying to convince everyone that society needs a looser approach to AI, just that this activism is dubious, unclear, and plausibly harmful.
This need not be about ruthlessness directed at your interlocutor, but rather at a distant or ill-specified other.
I think it would be uncontroversial that a better approach is not to present yourself as authoritative, but to present a conception of general authority in EA scholarship and consensus, and to demand that it be recognized, engaged with, cited, and so on.
Ruthless content drives higher exposure and awareness in the first place.
There seems to be an inadequate retention rate among people who are merely exposed to EA; consider, for instance, the high school awareness project.
Also, there seems to be a shortage of new people who will gather other new people. When you present only the nice message, you get a wave of people who may follow EA in their own right but don’t go out of their way to keep pushing it further, because it was presented to them merely as part of their worldview rather than as part of their identity. (Consider whether the occasionally popular phrase “aspiring Effective Altruist” obstructs one from having a real EA identity.) How much movement growth is being done by people who joined in the last few years, compared to the early core?
I am also thinking of how there has been more back-and-forth about the optimizer’s curse, people saying it needs to be taken more seriously etc.
I don’t think that the prescriptive vs. descriptive distinction really changes things; descriptive philosophizing about methodology is arguably not as good as simply telling EAs what to do differently and why.
I grant that #3 on this list is the rarest of the four. The established EA groups are generally doing fine here, AFAIK. There is a perfectly good CSER writeup on methodology here: https://www.cser.ac.uk/resources/probabilities-methodologies-and-evidence-base-existential-risk-assessments-cccr2018/ though it’s about a specific domain that they know, rather than EA methodology in general.
I’ve long preferred expressing EA as a moral obligation and support the main idea of that article.
Here’s some support for that claim which I didn’t write out.
There was a hypothesis called “risk homeostasis” according to which people always accept the same level of risk: e.g., it doesn’t matter that you give people seatbelts, because they will drive faster and faster until the probability of an accident is the same. This turned out to be wrong; people did drive faster, for instance, but not so much faster as to cancel out the safety benefits. The idea of moral hazard from victory leading to too many extra wars strikes me as very similar. It’s a superficially attractive story that lets one simplify the world and avoid thinking hard about complex tradeoffs. In both cases you are taking another agent and oversimplifying its motivations: the driver just has a fixed risk constraint and beyond that wants nothing but speed; the state just wants to avoid bleeding too much and beyond that threshold wants nothing but foreign influence. But the driver has a complex utility function, or maybe an inconsistent set of goals, about the relative value of more safety vs. less safety and more speed vs. less speed; therefore, when you give her some new capacities, she isn’t going to spend all of them on going faster. She’ll spend some on going faster, and some on being safer.
Likewise, the state does not want to spend too much money, does not want to lose its allies and influence, does not want to face internal political turmoil, and so on. When you give the state more capacities, it spends some of them on increasing bad conquests, but also some on winning good wars, on saving money, on stabilizing its domestic politics, and so on. The benefits of improved weaponry are fungible for the state, which can e.g. spend less on the military while obtaining a comparable level of security.
Security dilemmas throw a wrench into this picture, because what improves security for one state harms the security of another. In the limiting theoretical case, however, this just means that improvements in weaponry have neutral impact. Then in the real world, where some US goals are more positive-sum in nature, the impacts of better weapons will be better than neutral.
Yes, the “slaughterbots” video produced by Stuart Russell and FLI presented a dystopian scenario about drones that could be swatted down with tennis rackets, because the idea is that they would attach themselves to your head with an explosive charge.
It’s not as if banning drones stops someone from flying one in from somewhere else.
Yes, but it means that on the rare occasion that you see a drone, you know it’s up to no good, and you will readily evade it or shoot it down.
And political leaders: sure, you can speak from behind glass, but are you going to spend your whole life behind a screen?
No… but so what? I don’t travel in an armored limousine either. If someone really wants to kill me, they can.
More donations for movement growth: I would tentatively agree.
Okay, very well then. But if a polity wanted to do something really bad like ethnic cleansing, it would just allow facial recognition again and easily obtain it from elsewhere. A polity that is liberal and free enough to keep facial recognition banned will not tolerate ethnic cleansing in the first place.
It’s like the Weimar Republic passing a law forbidding the use of Jewish Star armbands: it could provide a bit of beneficial inertia and norms, but not much besides that.
I’ve recently started experimenting with that, I think it’s good. And Twitter really is not as bad a website as people often think.
But who is talking about banning facial recognition itself? It is already too widespread and easy to replicate.
To be sure, it is better than unfortified cereal (ceteris paribus), but such cereals usually contain a lot of refined grains and added sugar.
Sorry. This is it: https://forum.effectivealtruism.org/posts/MBJvDDw2sFGkFCA29/is-ea-growing-ea-growth-metrics-for-2018
If we had a cap-and-trade system, then presumably it could allow for that (I have no idea whether existing schemes actually do, in the few countries where cap-and-trade is implemented).
Reducing global poverty and improving farming practices lack philosophically attractive problems (for a consequentialist, at least), yet EAs work heavily on them all the same. And climate change does have some philosophical issues around model parameters like discount rates; admittedly, they are a little messier and more applied than questions about formal agent behavior.
One reason might be that you’re only accounting for one year of the meat-eater problem, while I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison).
I did that because I was only looking at one year of welfare improvement. One year for one year is simpler and more robust than comparing lifetimes. If you want to look at lifetimes, you have to scale up the welfare impacts as well.
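A quick way to see why matching time horizons matters: if both the welfare improvement and the meat-eater effect scale roughly linearly with years, the one-year ratio equals the lifetime ratio, so extending only one side of the comparison inflates it by the number of years. A minimal sketch with hypothetical placeholder numbers (all values below are illustrative assumptions, not estimates from this discussion):

```python
# Hypothetical placeholder values (illustrative only, not real estimates).
welfare_gain_per_year = 10.0  # welfare units gained per year by the intervention
meat_harm_per_year = 4.0      # welfare units lost per year via extra meat consumption
years = 50                    # an assumed remaining lifetime

# Mismatched horizons: one year of benefit vs. a lifetime of harm.
mismatched_ratio = welfare_gain_per_year / (meat_harm_per_year * years)

# Matched horizons: scale both sides to the same period.
one_year_ratio = welfare_gain_per_year / meat_harm_per_year
lifetime_ratio = (welfare_gain_per_year * years) / (meat_harm_per_year * years)

# Under linear scaling the matched ratios agree, so the choice of horizon
# doesn't matter, as long as both sides use the same one.
assert abs(one_year_ratio - lifetime_ratio) < 1e-12
print(mismatched_ratio, one_year_ratio)
```

The mismatched comparison understates the intervention by a factor of `years` relative to either matched comparison, which is why one-year-for-one-year (or lifetime-for-lifetime) is the fair framing.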
I’ve met a great number of people in EA who disagree with utilitarianism, and many who aren’t particularly statistically minded. Of course this doesn’t match the base rates of the population, but I don’t really see philosophically dissecting moderate differences as productive for the goal of increasing movement growth.
If you’re interested in ethnographies, sociology, case studies, etc., then consider how other movements have effectively overcome similar issues. For instance, the contemporary American progressive political movement is heavily driven by middle- and upper-class whites, and faces dissent from substantial portions of racial minorities and women. Yet it has been very effective in seizing institutions and public discourse surrounding race and gender issues. Have they accomplished this by critically interrogating themselves about their social appeal? No; they set such doubts aside and focused on hammering home their core message as strongly as possible.
If we want to assist movement growth, we need to take off our philosopher hats and put on our marketer and politician hats. But you didn’t write this essay with the framing of “how to increase the uptake of EA among non-mathematical (etc.) people,” which would have been very helpful; eschewing that in favor of normative philosophy was an implicit, subjective judgment about which questions are most worth asking and answering.
Why should I assume that this time is different?
The foundation of free trade is that it is mutually beneficial, since both parties agree to it.
With slavery, the slaves did not agree to be enslaved and transported; the enslavers used force, and this allowed them to make other people worse off. Today, traded goods don’t include forced laborers, though you could put livestock in that category, and I would actually favor restricting that trade.
With opium, the story was more complicated. Users wanted opium, but it’s an addictive drug that damaged them and Chinese society in the long run, so the Chinese tried to restrict its import, and the British forcibly compelled them to lift the restrictions. Today, we don’t use military force to get other countries to accept harmful goods. We do exercise some leverage by offering trade and finance deals to developing countries in exchange for changes to some of their economic policies; there is debate over this practice, with some arguing that we shouldn’t attach these strings, but since the countries still take these deals willingly, they are better than nothing.
I think you may find pro-free-trade people in favor of IP reform; these are rather separate issues. However, I doubt that many people of any stripe would want to remove IP rights entirely, as that would eliminate the incentive to pursue research and development.