+1 this. Hate FB. EA is the only reason I semi-regularly think about returning.
Randomized, Controlled
@Richard_Batty, I’d be interested in finding out more about what’s needed for the EA-SE proposal. I’ll also shoot Oliver an email about this.
I’ve so far only looked at sections 5 and 6, because those were the most immediately interesting.
I think the critique of the Wild Animal Suffering research is very much on target. I’ve always thought that at best, WAS work should be relegated to basic questions that can be tackled in biology or ecology.
All of the WAS interventions I’ve seen discussed seem deeply wacky, misguided and likely to be radioactive for the movement.
Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water.
Well, this might be a bit of an overstatement: we don’t really have a good idea of what’s up there. There is good evidence for titanium, and there may be platinum-group metals up there. Who knows what else?
The Moon, Mars, or colonies inside hollowed-out asteroids certainly don’t make sense as x-risk mitigation in the near or medium term, but at some point they’re going to be necessary.
“The thing that really gets me is that Democrats try to offer policies (paid sick leave! minimum wage!) that would help the working class,” a friend just wrote me. A few days’ paid leave ain’t gonna support a family. Neither is minimum wage. WWC men aren’t interested in working at McDonald’s for $15 per hour instead of $9.50. What they want is what my father-in-law had: steady, stable, full-time jobs that deliver a solid middle-class life to the 75% of Americans who don’t have a college degree. Trump promises that. I doubt he’ll deliver, but at least he understands what they need.
@rowborg/@kbog, this is a great thread, and I’m only going to dip my toe in, here, minimally, to +1 this point that rowborg has made. Employment isn’t just about exchanging labour for a wage; it’s a major source of meaning in people’s lives. The assertion that standards of living in the US have risen is probably correct, even for people at the bottom of the income distribution. But this standard-of-living increase can’t explain the incredibly sour mood of a huge chunk of people. Obviously it’s going to take a lot of time for researchers to unpack that, but the meaningless-employment story seems like a pretty plausible component: if people feel that what they do doesn’t have any meaning, and particularly if they feel their children don’t have prospects for meaningful employment, then despair and anger seem like totally to-be-expected reactions.
Normally I would not double-post an item, but I’d like to increase the chance people see this, and I don’t know if it warrants a front-page posting.
EA Toronto
I just created the Effective Altruism Toronto meetup. I’m already in touch with the organizers of LW Toronto. My goal is to reach a monthly meeting tempo over the next 3–6 months with a small core of regulars, and then reach a twice-monthly tempo.
Please spread the word to anyone who might be in the GTA!
Promotion help
If anybody has suggestions about how to best promote/spread the word, that would be super-great. I’m one of those tin-foil hat Facebook holdouts, but I’m willing to blow the dust off my account to do some promo for this. Pointers to FB groups/highly-connected-individuals/whatever, as well as non-FB related ideas would be really appreciated.
Inspired by this, I just created the Effective Altruism Toronto meetup. I’m already in touch with the organizers of LW Toronto. Please spread the word to anyone who might be in the GTA!
Also interested; I’d prefer something not Facebook-based. If something needed to be set up/maintained/whatnot, I’d be happy to help.
I’m going to be in SF/Berkeley from Dec 26th to Jan 4th. If anybody knows of any interesting meetups/events/groups/friendly people worth meeting/checking out, I would be hyper interested. Thanks.
To expand on my last point: my understanding of effective altruism is that it is expansive. Generous. About becoming “more the people we wished we were”. I do not see it as a movement that ridicules or comes from schadenfreude or is punitive. The AM hack is the result of horribly unethical business and software practices, and its fallout is causing a lot of suffering. That’s why I think it’s bad for EA’s image if ‘we’ are seen to be joking about it.
I’ve been thinking lately that nuclear non-proliferation is probably a more pressing x-risk than AI at the moment and for the near term. We have nuclear weapons and the American/Russian situation has been slowly deteriorating for years. We are (likely) decades away from needing to solve AI race global coordination problems.
I am not asserting that AI coordination isn’t critically important. I am asserting that if we nuke ourselves first, it probably won’t matter.
Do you know that:
a) AM did not verify email addresses? I.e., you could register someone else’s email address and they might not know it.
b) AM had users in repressive regimes where non-heterosexuals faced violence or death? For some of AM’s users, the promise of a discreet forum represented a less-dangerous way to find partners.
c) AM was generally known to be a good place for queer/gay/bi/etc. users to hook up, even in non-repressive regimes.
d) It’s unknown how many users were single or ethically non-monogamous.
e) It’s unknown how many users were researchers, journalists, or simply curious.
I understand your post is a joke, but it’s in poor taste. And even if everybody involved were demonstrably a cheater, I don’t think it’s good for EA’s image to be seen as a finger-wagging movement.
A realist Millennial’s view of nuclear weapons by Matthew R. Costlow is a recent, interesting, and problematic short essay which more asserts than argues that the US would be more secure maintaining a large stockpile of nuclear weapons.
More interesting, I think, is the author’s assertion that current young activists have a weak understanding of the relevant policy, security and history issues. Costlow doesn’t mention Effective Altruism by name, but I suspect that within the movement we probably could stand to level-up our expertise on the area. Nuclear risks are easily existential level and complex problems, yet also potentially tractable over time, given focused attention and advocacy, both of which dropped off considerably after the end of the Cold War. Perhaps Effective Altruism should begin focusing a significant amount of energy on nuclear proliferation and deterrence theory, as well as the associated political, diplomatic, military, economic and historic concerns? The goal would be to find policy proposals and solutions intended to decrease the risk of large-scale nuclear exchanges.
I suspect that in the long term, one unit of lab-grown animal (meat | dairy | X) might be less cruel than some current methods for getting an equivalent unit, but I don’t know that it’s a certainty. Getting tissues and cells to make cloned meat often means working with butchered animals to begin with. And the lab work involved in the R&D is enormously wasteful in terms of resources. Maybe that initial outlay of suffering is then counterbalanced by having a suffering-free (or suffering-reduced) food system, but what if there’s an ethical cost to manipulating animals in a way that essentially treats them (or their cells/tissues) as raw/inanimate inputs for industrial biotech/agricultural processes? There was recently a pretty nice project at the Royal College of Art proposing a vertical farm of chickens engineered to only have brain stems. I think it gets to the crux of the problem of treating animals as raw material to be engineered.
ping!
I’m seriously considering attending the upcoming EA summit in SF. If you were at the 2014 summit, I’m curious what the experience was like. If you have any information about the 2015 version, I’d also be very interested.
There was an article about nano-satellites on slashdot this afternoon, which cites a $30k figure for an individual satellite build and launch. At that price, obviously it’s a tightly constrained package; the same source cites $200k for a cube-sat, which is a bit roomier.
People are starting to think of these types of assets as “relatively” cheap components in constellations—rather than launching one very high-value, highly capable sat, launch a cluster of smaller/cheaper sats, which can potentially evolve over time as some of them are de-orbited and replaced.
There are some obvious x-risk and EA applications (as well as many potentially non-obvious ones!), like tracking and searching for Near-Earth Objects (i.e., killer rocks from space), as well as all sorts of Earth-imaging applications and potentially space commerce applications.
I’m guessing the sums of money involved are probably still outside what’s practical for most of us in the EA/x-risk community, but I expect that this is going to be a growth sector, which means that prices may very well come down a lot over the next few years. Thoughts?
Would anybody be interested in an x-risk reading group? I know MIRI’s been running one going through Superintelligence; I’d love to read a broad swath of x-risk related material, and meet with people to discuss either in person or online (or both). IRL, I’m in Toronto, Ontario.
Has the date for the 2015 EA Summit been set yet?
Does EA need [a] reputation system[s]?
Reputation systems are typically used by on-line platforms to help enable higher levels of trust between users.
1) My sense is that within EA there is a norm that we Do Favors For Each Other; ie, EAs often seem to have the subgoal ‘try to help other EAs, within reason’. This is both correct and lovely.
2) This norm may come under significant pressure as the community continues to scale. Will it be sustainable when the community has grown 10x? 100x? 1000x?
If both of these propositions are correct, then an EA reputation system may be worth thinking about. EA presents some interesting challenges as a big-tent social movement, spread across many different on- and off-line platforms. Some initial ideas of what a reputational system could look like:
Yet Another Webpage: eahub.org already supports profile pages for EAs, with links to FB, this forum, LessWrong, etc. If most EAs have a page on eahub, with up-to-date links to their other on-line personas, maybe that’s enough?
A Score: something like karma or the rep systems reddit/stack-exchange use, but able to deal with the multi-platform nature of EA. There are significant technical and social challenges with scoring systems even when they are only on a single platform.
A web of trust: something like the PGP web-of-trust, where EAs could essentially vouch for each other.
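To make the web-of-trust option concrete, here is a toy sketch of how vouching could propagate trust. All names, the vouch graph, and the hop-limit heuristic are hypothetical illustrations, not a proposal for an actual implementation:

```python
# Toy PGP-style web of trust: a user is "trusted" if they are reachable
# from a set of seed users via vouch edges, within a limited number of hops.
from collections import deque

def trusted(vouches, seeds, max_hops=2):
    """Return the set of users reachable from `seeds` within `max_hops` vouches.

    vouches: dict mapping a user to the list of users they vouch for.
    seeds:   users trusted unconditionally (e.g. well-known community members).
    """
    reached = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        user, dist = frontier.popleft()
        if dist == max_hops:
            continue  # don't extend trust beyond the hop limit
        for endorsed in vouches.get(user, ()):
            if endorsed not in reached:
                reached.add(endorsed)
                frontier.append((endorsed, dist + 1))
    return reached

# Hypothetical vouch graph: alice vouches for bob, bob for carol, etc.
vouches = {
    "alice": ["bob"],
    "bob": ["carol"],
    "carol": ["dave"],  # dave is 3 hops from alice, outside the default radius
}
print(trusted(vouches, ["alice"]))  # {'alice', 'bob', 'carol'}
```

Even this toy version shows the main design knob: how far a vouch should propagate before trust decays to zero, which is exactly the kind of social/technical question a multi-platform EA system would have to answer.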