That’s true, but that comment was meant only for you, since you seemed confused about which kind of ‘should’ to use in a normative sentence. I took it for granted that you already knew ‘normative’, because you had posted a nice and useful answer to the original question.
Aristotle would answer “‘should’ is said in many ways”. I was of course thinking of the normative ‘should’, which I believe is the first that comes to mind when someone asks about normative sentences. But I’d be highly interested in a different kind of counterexample: a normative sentence without a ‘should’ stated or implied.
There’s a ‘should’ either stated or implied.
To achieve this you could create a “community user” and share the password at the top of the post. People would log in with it, make changes, and explain them in the comments. I’m not sure whether sharing the password would be against the Forum’s rules.
It has happened to me that, while trying to make an edit, I accidentally clicked OK on the warning that says “We’ve found a previously saved state for this document, would you like to restore it?”, thus restoring an old version of the article and reverting someone else’s edits.
I don’t think I will elaborate on policies, given that they are the last thing to worry about. Even RP’s negative report counts new policies among the benefits of charter cities. Now that we supposedly have effective ways to improve welfare, why wouldn’t we build a new city, start from scratch, do it better than everybody else, and show it to the world? While I agree that this can’t be done without putting a lot of thought into it, I believe it must be done sooner or later. From a longtermist point of view: how could we ever expect to carry out a rational colonization of other planets when nobody on Earth has ever managed to successfully found even one rational city?
Mere libertarians may have failed, as anarchists did in similar attempts. But I believe that EAs can do better. An EA city would be a perfect place to apply many of the ideas and policies we are currently advocating for.
Here is an even more ambitious one:
Found an EA charter city
Effective Altruism
A place where EAs could live, work, and research for long periods, with an EA school for their children, an EA restaurant, and so on. Houses and a city UBI could be interesting incentives.
Kelsey Piper has written an excellent article on different ways to help Ukrainians, including how to donate directly to the Ukrainian military. But she wisely points out that “[s]uch donations occupy a tricky ethical and even legal area… A safer choice would be to direct money to groups that are providing medical assistance on the ground in Ukraine, like Médecins Sans Frontières or the Ukrainian Red Cross.”
This is the only post that quoted it last year. It explains the idea, but it doesn’t look like the one you’re looking for.
Every culture has always been concerned about the future, the afterlife, and so on, but it seems to me that worries about “remote” future generations are relatively recent. There are probably isolated counterexamples, though, which I believe are the ones you are looking for. Aside from that, in the animal kingdom there is of course the instinctive concern for the “next” generation, which is in turn reproduced in every following generation.
Could anyone help me downvote the ‘Job listing (open)’ tag? Applications closed two days ago. Thanks