I see. Thank you!
Xing Shi Cai
Hmm. I don’t see this on the Events page. Does anyone know why?
The only scenario where this could happen is if all of these people went completely untreated, which means that no local government would come in at any stage. This scenario is impossible.
Can you elaborate why this is impossible, or at least unlikely?
More USAID staff ousted after Trump administration dismantles aid agency (Reuters) --- https://neuters.de/world/us/more-usaid-staff-ousted-trump-administration-dismantles-aid-agency-2025-02-02/
I also created a much shorter FAQ https://dku-plant-futures.github.io/faq/
Nicely written. Though my experience of discussing why I am a vegetarian often suggests that people are not really interested in knowing what’s really going on. Perhaps being willfully ignorant can offer some sort of comfort.
Here’s a skeptical take on o3 from a professional mathematician https://xenaproject.wordpress.com/2024/12/22/can-ai-do-maths-yet-thoughts-from-a-mathematician/
Does anyone know of a good source on the impact of factory farming on the climate? Something like this post, concise yet comprehensive.
Thanks for listening. LLMs do indeed make things up. 😃
https://notebooklm.google.com/notebook/de9ec521-56b3-458f-a261-2294e099e08c/audio It seems that I missed an “o” at the end. 😂
Thanks. I have translated two sections to Chinese here: https://dku-plant-futures.github.io/zh/post/factory-farming-as-a-pressing-global-problem/
Can I translate this to Chinese and publish it on my website? https://dku-plant-futures.github.io/
Made a NotebookLM podcast https://notebooklm.google.com/notebook/de9ec521-56b3-458f-a261-2294e099e08c/audi for this.
I work at a university in China, and with the help of some vegetarian students, I’ve been trying to encourage others to eat less meat. However, I’ve found it challenging to engage students who aren’t already interested in vegetarianism.
For instance, last semester, I organized a Meatless Monday Lunch every week. The same group of people I already knew would attend, but it didn’t attract new participants. I even offered free lunches to students to make it more appealing, but that didn’t seem to help.
I also hosted a documentary screening about the health effects of eating meat. Attendance was very low—fewer than 10 people showed up—and most of them seemed distracted, spending their time on their phones.
On the bright side, our canteen has improved its plant-based options with our help. I think this may encourage more people to try them. Unfortunately, I don’t have access to the canteen’s data, so I’m not sure whether this actually worked. Personally, it did make eating at the canteen a bit more pleasant.
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I’ve noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I introduced the topic of AI risk (as a light-hearted aside in class) and referred to a study (possibly this one: AI Existential Risk Survey) suggesting that a significant portion of AI experts are concerned about potential existential risks. The students’ immediate reaction was to challenge the study’s methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things “better.”
Was it a mistake to start an organization like OpenAI? People with good intentions created a beast that they cannot uncreate. I had the same feeling after watching Oppenheimer.
Just saw this on Hacker News as a response to Sam Altman Exposes the Charade of AI Accountability. The damage to EA’s reputation is hard to estimate but perhaps real.
I think people have yet to realize that this whole AI Safety thing is complete BS. It’s just another veil, like Effective Altruism, to get good PR and build a career around. The only people who truly believe this AI safety stuff are those with no technical knowledge or expertise.
It will settle down soon enough. Not much will change, as with most breaking news stories. But I am wondering whether I should switch to Claude.
How much credibility does he still have left after backtracking?
Adrian Tchaikovsky, the science fiction writer, is a master at crafting bleak, hellish future worlds. In Service Model, he has truly outdone himself, conjuring an absurd realm where human societies have crumbled, and humanity teeters on the brink of extinction.
Now, that scenario isn’t entirely novel. But what renders the book both tear-inducing and hilarious is the presence in this world of numerous sophisticated robots, designed to eliminate the slightest discomfort from human existence. Yet they adhere so strictly to their programmed rules that this only leads to endless absurdities and meaningless ordeals for robots and humans alike.
Science fiction writers, effective altruists, and Silicon Valley billionaires have long cautioned that the rise of sentient, super-human artificial intelligence might herald the downfall of our own species. However, Tchaikovsky suggests a different, perhaps more mundane, and even more depressing scenario. He proposes that precisely because robots, no matter how advanced, lack free will and cannot exercise their own volition in decision-making, they will not only fail to rescue us from impending environmental, political, and economic crises, but will also be incapable of replacing us by creating a better world of their own.
And that, I believe, is Tchaikovsky’s final warning to humanity. I hope that future historians, if they still exist (since there aren’t any left in Service Model), will regard him as a mere novelist, one who tries to capitalise on the general unease about advancements in artificial intelligence. Yet I fear he may indeed be onto something.