Friend of animals :D
Nice ai pics :D
For every dollar someone lost, someone else gained a dollar. I wonder where that money went.
If you think about the fraud SBF committed, it could technically be thought of as merely a redistribution of wealth from those who held FTX assets to those who shorted FTX and Alameda.
Could it be possible that this redistribution was good?
Yellow (Daryl)’s Quick takes
Interesting point: what's at stake here is the delta between an excruciating death now and a few years of wild animal life followed by a painful death later.
Thanks! I have iodine supplements but wasn't sure whether I should take them because of confusion around how much is too much.
But since I'm not a table salt person, it looks like I should take at least a little!
Interesting suggestion. Curious what other ideas are out there and why the one suggested here may be preferred.
Looks like that list of possible things that could be impacted was copied from this source:
https://www.exploreveg.org/2023/06/26/urgent-alert-animal-welfare-and-state-rights-under-threat-from-eats-act/
GPT-4 says it’s unlikely that those would be impacted by the bill: https://chat.openai.com/share/49d449b2-c753-4fc3-bee3-61179832fa5d
This seems more accurate in its discussion of which laws may be impacted:
https://sentientmedia.org/8-key-laws-threatened-by-eats-act/
Are we sure it’s for that very reason?
Seems plausible, but it would be nice if we had data on this.
Very likely so (imo).
But possibly there are other important questions as well relating to the impact of improving global health, such as: “Does improving global health increase innovation?”
“Does a faster rate of innovation bring a world where large-scale bioengineering helps animal welfare sooner?” (e.g. via wild animal welfare, or clean meat)
Are we sure the domains being compared are similar in supply and demand? E.g., low-expertise/experience software engineers make more than similarly experienced UI designers.
“It forces the suffering-concerned agents to make trade-offs between preventing suffering and increasing their ability to create more of what they value. Meanwhile, those who don’t care about suffering don’t face this trade-off and can focus on optimizing for what they value without worrying about the suffering they might (in)directly cause.”
Hi Jim, is it really the case that spending effort preventing suffering harms your ability to spread your values? Take religious people: they spend much effort on religious activities that are not exactly aimed at optimizing their survival, yet religious people also tend to be happier, healthier, and have more children than non-religious people (no citation, but it seems true, all else being equal).
Could it be the case that those who don’t optimize for reducing suffering, instead of optimizing for spreading their values, optimize for or do something else that decreases the likelihood of their values spreading?
Also, why now? Why haven’t we already reached, or come close to, an equilibrium between reducing suffering and spreading values, given that these selection pressures have been around for a long time?
Interesting. I wonder if an AGI will have a process for deciding its values (like a constitution). But then the question is how it decides on what that process is (if there is one).
I thought there might be a connection between an AGI having a nuanced process for picking its values and its problem-solving ability (e.g., figuring out how to end the world), such that having the ability to end the world would imply a good ability to work through nuance in its values, and to conclude that ending the world may not be valuable. Possibly this connection might not always exist, in which case epic sussyness may occur.
Not too sure how important values in datasets would be. Possibly AGIs may be created differently from current LLMs, simply not needing a dataset to be trained on.
I’m not too confident that AGIs would be prone to value lock-in. Possibly I am optimistic about AI, but AI already seems quite good at working through ethical dilemmas and acknowledging that there is nuance and that views on morals conflict. It would seem like quite the blunder to simply regard the morals of those closest to it as the ones of most importance.
Are replacement pledge pins available for purchase? (I haven’t lost mine yet, but I would feel better knowing I have that option if I do. My pledge pin is the most expensive possession I own xd, so I’m a bit hesitant to take it outside.)
Whoa, this is cool.
Global Food Partners created a producer directory called cagefreehub.globalfoodpartners.com for sellers in Asia. GFP has done a lot of outreach work to sellers for it; it may be useful to chat with GFP about what it is and whether it could be useful for Africa. (Note: I am the web developer of Cage-Free Hub.)
Both Apple’s App Store and Google Play Store have policies in place that require you to provide clear and transparent communication to users about charges before processing in-app purchases.
Apple’s App Store Review Guidelines state that you must “clearly and accurately describe the in-app purchase and its terms” to users before they initiate the purchase. This means you need to provide sufficient information about the charge, such as the price, what the user will receive in return, and any applicable terms or conditions. It’s essential to obtain explicit user consent before processing any charges.
Similarly, Google Play’s Developer Policy Center requires that you “clearly disclose the price, currency, and the specific functionality or content users are purchasing” and “obtain explicit consent from the user before charging them.” The pricing information should be presented in a way that is easily understandable to users.
To comply with these policies, it is common practice to display a clear and prominent dialog or popup within your app, informing users about the charge and requesting their consent before proceeding with the purchase. This popup should provide a clear description of the purchase, the associated cost, and any relevant terms or conditions.
Thus, having a button to click that charges $1 isn’t possible without a popup between each click.
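The disclose-then-consent-then-charge flow described above can be sketched roughly as follows. This is a minimal sketch, not a real implementation: `showConsentDialog` and `processCharge` are hypothetical stand-ins for an app’s UI and payment backend, not actual StoreKit or Play Billing APIs.

```typescript
interface Purchase {
  description: string; // what the user receives
  price: number;       // amount in `currency`
  currency: string;
}

// Hypothetical: a real app would render a dialog here and wait for the
// user's tap. `userAccepts` simulates the user's choice for this sketch.
function showConsentDialog(p: Purchase, userAccepts: boolean): boolean {
  console.log(`Buy "${p.description}" for ${p.price} ${p.currency}?`);
  return userAccepts;
}

// Hypothetical payment call: only reached after explicit consent.
function processCharge(p: Purchase): void {
  console.log(`Charged ${p.price} ${p.currency}`);
}

// Policy-compliant flow: disclose price and content, obtain explicit
// consent, and only then charge.
function buy(p: Purchase, userAccepts: boolean): "charged" | "cancelled" {
  if (!showConsentDialog(p, userAccepts)) {
    return "cancelled"; // never charge without explicit consent
  }
  processCharge(p);
  return "charged";
}

buy({ description: "One click", price: 1, currency: "USD" }, true);
```

The point of the sketch is just that the consent dialog sits unavoidably between the button press and the charge, which is why a bare one-tap $1 button can’t comply.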
Anyone got ideas for alternatives?
Hi Jeremy
I think an AGI would be much better at creating arguments for why humanity should not be eliminated. If an AGI is incapable of creating these arguments itself, I wonder whether it is capable enough to destroy humanity.
I think the thing to worry about more is that an AGI correctly determines that humanity, or most of humanity, needs to be destroyed (e.g., the AGI cares about all life, and all humans murder their face mites, so they must be stopped). But is that really all that bad?
The nuclear football is a lie?! TIL