I don’t like it when animal advocates are too confident about their own approach and critical of other advocates. We are losing badly; meat consumption is still skyrocketing! Now is the time to be humble and open-minded. Meta-advice: Don’t be too critical of the critics either!
If you cannot tell Duncan Sabien is an abusive person from reading his Facebook posts, you should probably avoid weighing in on community safety. He makes his toxicity and aggression extremely obvious. Lots of people have gotten hurt.
(Of course there is other evidence, like the fact that he constantly defends bad behavior by others. He was basically the last person publicly defending Brent. Yet he continues to be considered a community leader with good judgment.)
At this point, unless you are very talented and/or working at Anthropic/OpenAI/DeepMind, I don’t see much reason to avoid working in AI. The timeline is already burnt. The people who burnt it, often in the name of altruism, should be ashamed. But at some point the benefits of trying to do good things with a dangerous technology outweigh the downsides of accelerating progress. Prior to ~now, it was quite bad to work on AI in more or less any capacity. But the train is leaving the station anyway, and marginal contributions to acceleration are now smaller than the plausible positive impact of using the tech for good. Accelerating AI was an incredibly dumb strategy, but at this point we might as well play to the out where alignment isn’t that hard.
My attempt at a reasonable AI/semis portfolio:
MSFT − 10%
INTC − 10%
NVDA − 15%
SMSN − 15%
GOOG − 15%
ASML − 15%
TSMC − 20%
Interested if anyone thinks I got this hugely wrong.
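For anyone who wants to play with the numbers, here is a minimal sketch (purely illustrative, not investment tooling; the $10,000 budget is a made-up placeholder) that checks the weights sum to 100% and converts them into dollar amounts:

```python
# Quick sanity check of the allocation above (illustrative sketch only;
# the $10,000 budget is a placeholder, not a recommendation).
portfolio = {
    "MSFT": 10,
    "INTC": 10,
    "NVDA": 15,
    "SMSN": 15,
    "GOOG": 15,
    "ASML": 15,
    "TSMC": 20,
}

# The percentage weights should sum to exactly 100.
assert sum(portfolio.values()) == 100, "weights don't sum to 100%"

# Translate weights into dollar amounts for a hypothetical account size.
budget = 10_000
for ticker, pct in portfolio.items():
    print(f"{ticker}: {pct}% -> ${budget * pct / 100:,.0f}")
```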