Is there a potential naming collision with https://www.asteriskjournal.org ? Are you going with ‘asteriskmag’ as the domain name perhaps?
Azure's comment on Introducing Asterisk (27 May 2022 10:14 UTC; 7 points)
Human Challenge Trials as a wiki entry (less so as a tag)
An idea I think a sizeable portion of people here are sympathetic to, and the entry could act as a good companion to entries like 1Day Sooner and COVID-19 pandemic.
Thanks Pablo! Looks great! I really appreciate your work on the wiki.
Windfall Clause (under Global Catastrophic Risk (AI))
Justification:
Important as a wiki topic to give a short description of this policy proposal and relevant links/papers/discussion, as it seems like an important output of the AI governance literature.
Tag, as potential future posts may discuss/critique the idea (e.g. the second post below)
Posts that it could apply to:
Thanks for your help and guidance. I agree that for now it’s not worth it!
I didn’t really have a preference, to be honest! I was just curious and a little confused by the fact that some posts and one of the “further reading” links used the “anti-aging” terminology.
Thank you for the point about the general format being “[Area] research”; that makes sense and will be useful to me for potential future wiki edits. Also, thank you to the other commenter for the “cancer research” analogy, which makes sense too.
Is it worth updating the style guide for the “[Area] research” convention or is it too niche and may add unnecessary bloat?
Thank you for this piece, Lizka!
To what extent do you agree with the following?
Strong identities are epistemically suboptimal (i.e. if you are an agent whose exclusive concern is to hold true beliefs, then holding strong identities is likely to harm that project), but they may be socially expedient (perhaps necessary) in that they facilitate cooperation (by encapsulating views and motives).
May I ask for the reasoning for the title being “Aging research” as opposed to “Anti-aging research”?
I must assume it’s because the former is the name established in the academic literature? Or is it to maintain some kind of fact/value distinction?
Thanks in advance!
Patient philanthropy?
Hi Peter.
Thank you very much for this! It’s much appreciated and I’m glad my comments were somewhat helpful.
Perhaps you might wish to submit the new version as a separate post?
If you do, I would also suggest contacting Aaron Gertler, the forum moderator, for feedback. All the best.
Hi Peter!
Thank you for the write-up!
You’re currently getting downvoted (unfortunately, I think!), but I thought I would try to flesh out some possible reasons why, to spur discussion:
1. Whether intentional or not, the ‘flat earth’ images do not seem a favourable presentation of your ideas, and they do not seem necessary to make your claims.
2. There is not much structure to the post. I think readers would appreciate an introduction and a conclusion explaining what you are trying to address and how you’ve done so.
3. Some of the explanations are quite confusing (at least to me), e.g. it’s not clear what you mean exactly by
‘It can brighten—improving the enterpretation[sic] of a given sentience from a darker to a brighter sentience’
Does this mean ‘higher utility/welfare’?
4. I don’t think the post is sufficiently self-contained and free standing to make a credible case.
Also keen to hear whether people agree/disagree with the above!
Adding one more (hopefully relevant) link:
Dylan Matthews on “Global poverty has fallen, but what should we conclude from that?”
which is more or less a podcast version of the Vox article by Dylan Matthews; the link (and Hickel’s response) can be found in Max_Daniel’s very helpful list of links.
Hey! Your link sends us to this very post. Is this intentional?
Thank you for this post! Very interesting.
(1) Is this a fair/unfair summary of the argument?
P1: On anti-speciesist grounds, we should be indifferent between humans and some other intelligent life form enjoying a grand future.
P2: The risk of extinction of humans alone is strictly lower than the risk of extinction of humans plus all possible future (non-human) intelligent life forms.
C: Therefore we should revise downwards the value of avoiding the former / revise upwards the value of avoiding the latter.
(2) Is knowledge about the current evolutionary trajectories of non-human animals likely to fully inform us about ‘re-evolution’? What are the relevant considerations?
Additionally, is it not likely that those scenarios are correlated?
I think the idea is to assign credences to plausible theories, where plausible is taken to mean some subset of the following:
Has been argued for in good faith by professional philosophers
Has relevant and well-reasoned arguments in favour of it
Accords at least partially with moral intuitions
Is consistent/parsimonious/not metaphysically untoward/precise, etc. (the usual desiderata for explanations/theories)
Concerns the usual domain of moral theories (values, agents, decisions, etc.)
Another equivalent way to proceed is to consider all possible theories, but the credence given to the (completely) implausible theories is 0 or sufficiently close to it.
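The equivalence between the two framings can be sketched numerically. A minimal illustration in Python, where the theory names and credence values are purely hypothetical and chosen only for demonstration:

```python
# Sketch: credences over moral theories (hypothetical names and numbers).
# Implausible theories receive credence 0 (or close to it).
credences = {
    "theory_a": 0.4,
    "theory_b": 0.3,
    "theory_c": 0.3,
    "implausible_theory": 0.0,  # considered, but assigned zero credence
}

# Credences over all considered theories should sum to 1.
assert abs(sum(credences.values()) - 1.0) < 1e-9

# Restricting attention to the plausible theories (credence > 0)
# leaves the distribution unchanged, which is why the two procedures
# described above are equivalent.
plausible = {t: c for t, c in credences.items() if c > 0}
assert abs(sum(plausible.values()) - 1.0) < 1e-9
print(sorted(plausible))
```

The point is simply that including implausible theories with credence zero changes no downstream expected-value calculation, so the "all theories" and "plausible theories only" formulations coincide.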