Ex-Apple ML engineer with some research and entrepreneurial background. I do technical AI alignment research, but I'm also interested in the bigger problem, which I call Human alignment. I'm starting a new project that aims to tackle one aspect of this bigger problem. I'm a long-term member of the Czech EA and LW community and attended a CFAR workshop.
hrosspet
I believe improving (group) epistemics outside of our bubble is an important mission. It's great that you're working with policy makers!
Hi Niplav, thanks for your work! I’ve been thinking about doing the same, so you saved me quite some time :)
I made a pull request suggesting a couple of small changes and bug fixes to make it more portable and usable in other projects.
For other readers this might be the most interesting part: I created a Jupyter notebook that loads all the datasets and shows a preview of each. So now it should be really simple to start working with the data, or just to see whether it's relevant for you at all.
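A minimal sketch of the kind of loading and previewing the notebook does, assuming the datasets sit as CSV files in a local `data/` directory (the directory layout and file format are my assumption, not necessarily the repo's actual structure):

```python
# Hypothetical layout: the datasets as CSV files under ./data
import pandas as pd
from pathlib import Path

for csv_path in sorted(Path("data").glob("*.csv")):
    df = pd.read_csv(csv_path)
    print(f"{csv_path.name}: {len(df)} rows, columns: {list(df.columns)}")
    print(df.head(3), end="\n\n")  # quick preview of the first few rows
```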
If you’d like to collaborate on this further I might add support for Manifold Markets data and Autocast dataset, as that’s what I’ve been working with up till now.
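In case it helps, the Manifold side could start from their public REST API. A rough sketch (the endpoint is from their public docs; field names are from memory and may have changed, so double-check):

```python
# Fetch a page of markets from Manifold's public API and print a few
import requests

resp = requests.get(
    "https://api.manifold.markets/v0/markets",
    params={"limit": 100},  # the API paginates; pass 'before=<market id>' for more
    timeout=30,
)
resp.raise_for_status()
for market in resp.json()[:5]:
    # 'probability' is present on binary markets; .get() avoids a KeyError elsewhere
    print(market["question"], market.get("probability"))
```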
I'd also add that virtues and deontologically right actions are results of memetic evolution, and as such can be thought of as precomputed actions or habits that have proven beneficial over time and thus have high expected value.
Not all conscious experiences are created equal.
Pursuing the ends Tyler talks about helps cultivate higher-quality conscious experiences.
Not sure how seriously you mean this, but news should be both important and surprising (i.e., have new information content). I mean, you could post this a couple of times, as this news might be surprising to many non-EA people, but you shouldn't keep posting it indefinitely, even though it remains true.
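To make "new information content" concrete: in Shannon's terms, an event with probability p carries −log₂(p) bits, so the more expected a piece of news is, the less information repeating it conveys. A toy illustration (the probabilities are made up):

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information content (in bits) of an event with probability p."""
    return -math.log2(p)

print(surprisal_bits(0.05))  # genuinely surprising news: ~4.3 bits
print(surprisal_bits(0.99))  # news everyone already expects: ~0.01 bits
```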
Thanks for sharing, will take a look!
This is my list of existing prediction markets (and related things like forecasting platforms), in case anyone wants to add what's missing:
https://www.metaculus.com/
https://polymarket.com/
https://insightprediction.com/
https://kalshi.com/
https://manifold.markets/
https://augur.net/
https://smarkets.com/
Interesting experiment!
One argument against the predictive power of stories is that many stories evolved as cautionary tales. If a cautionary tale works, people heed the warning and the predicted outcome never happens, so the tale ends up with zero predictive accuracy. That would also possibly fit this particular scenario.
Disclaimer: I've read in full only "Takes for Self-improvers, clients, people 'bought into' self-development", which is the part I'm most interested in; I skimmed the rest.
Thanks for the writeup! I’d be interested in hearing your thoughts on how I should figure out the value of getting coaching.
My current approach is to do a lot of self-coaching, and only when I feel stuck for a longer period or overwhelmed do I reach out to a coach/therapist. Then I use the sessions not only to figure out the object-level problem, but also to learn how to become a better self-coach by reflecting on the sessions at a meta-level (so that eventually I don't need them anymore).
There is of course an opportunity cost: I could just get coaching sessions regularly, whether I'm stuck or not, and focus on my own thing (engineering/parenting/founding/…). But if I save the time and effort by not learning to coach myself, and instead outsource the coaching skills to others, won't I still need those skills in the future?
There is of course always the benefit of having another person check my thinking and of hearing their perspective, but that person doesn't need to be a coach; it can be a domain expert, if my self-coaching skills are good enough.
To sum up: what am I likely missing with this approach?