https://www.linkedin.com/in/benjamin-eidam/ + https://benjamineidam.com/ + https://benjamineidam.substack.com/
More will (probably, at some point) follow here.
Hi Minh,
thanks for commenting! Yeah, I’ve heard the “marketers win over experts” argument, and I think it’s a good one. But my “strategy” here is that in Germany we value values-driven work relatively deeply, at least in theory; it is literally part of our culture, with our poets and so on.
In that sense, education PLUS values would be a competitive advantage, but I don’t know whether the market sees it that way.
Only one way to find out, I guess.
Cheers
Ben
Thank you for engaging, Max! :)
Yeah, I think / hope that EAGxBerlin will give this project (and me) one big, final push, the final direction and polish it needs to go all in. Let’s see.
Hm. First of all, IT is not an “ancient” concept either, but I get your point. Then again: this will be a university, not an official representative institution. I think it will just play its small role in the growing EA ecosystem, but you never know, true. I haven’t fully thought it through yet, so more on this in an update to the article. Thanks a lot!
Cheers
Ben
Hi Max,
thanks for the reply! These are very good questions, and exactly the kind of thing I meant when I said I would probably forget to mention something. So here we go:
My EA background: I have been “involved” in EA since ~2014. Involved in the sense that I’ve read some books and articles, discussed EA ideas in my circles, and spread EA ideas embedded in other things like articles, as EA quickly became a fixed part of my thinking. Right now I’m technically part of the EA group here in Leipzig, but I am not as active as I should be, I think. So I would give myself a ~7/10 on EA understanding. (That’s also why I would especially love to talk to / work with other EAs on this project.)
“will likely catch attention and be taken as representative of the EA movement. This is a consideration that I think would be very salient to people who have spend sufficient time engaging with the EA movement to be well-equiped to lead such an effort.” That is a good argument; I’ve never seen it from that perspective, tbh. Thank you! I don’t think this will be the case, as nobody thinks of any single IT company as representative of all of IT on earth, but I will think about it and update my article as soon as I’m done.
The four courses were chosen because a) they work, b) they can fairly reliably be certified, c) they bring practical advantages for the students in their lives beyond just knowing fun facts, and d) they seem [from my perspective] like the perfect intersection of EA ideas and things that can be used in the real world. That’s why I chose them. And that’s why they are explicitly not called “existentialism for people who want a job” or something like that.
“It’s weird to me that you call this a university, as you mention in the comments that it’s only supposed to last up to 16 weeks.” You have to start somewhere. Why I’m starting exactly here is covered in answer 3.
“I looked you up on LinkedIn and it seems like the company is only you and one relative, who started working at your company 4 months ago and immidiately went on a sabbatical.” I don’t know where / how you got these numbers, but that’s an interesting piece of research. The whole story won’t fit into this comment, but just look at the history of my own website to get an idea of my past.
Thank you!
Cheers
Ben
Hi Yi-Yang, thank you for your comment!
To 1.: Could you please elaborate on what you mean by “I know you’re hoping to certification for your university, but the four courses you listed don’t seem relevant.”? Thank you!
To 2.: Don’t you think that when you get more people who come in because they want to learn / qualify for better job options in the future, and who leave with a lot of knowledge about, well, the “world right now” plus EA, they would join EA causes more readily? My experience tells me that this works, although I’ve admittedly never trained people in exactly the way I intend for DI.
To 3.: I will update as soon as I have some useful thoughts about it.
About talking to the community builders: do you have anyone specific in mind? Or sources where I can find them? Thank you!
Cheers
Ben
Thanks for your comment Alex!
I think if you have people for at least 3-5 weeks and literally give them a new perspective on the world, a lot of them can at least realistically consider EA, because they simply know how to place it as a “meta” mental model. Which brings me to the answer to your question:
“How long do you expect students to participate?”
Based on my experience, at least 4 weeks and up to 16 weeks. As I wrote: I borrow concepts that already work in a real-life context, so there is no need to experiment there.
Cheers
Ben
Thank you, acylhalide!
(With AI) You only get one shot, do not miss your chance to blow (to go right). This opportunity comes once in a lifetime (era). (Paraphrasing Eminem.)
We don’t have a second chance here. Let that sink in. This is like nothing before.
What do you think ants would have done to prepare if they had had to create humans? And if they had done nothing: would they regret that decision by now?
As long as we don’t have a “meditation for AI”, we need to find ways to deal with it.
(To politicians especially) Right now, you just have to trust, and rely on that trust, that others will handle the most important technology of all time well. This “acting on blind trust” is not in everyone’s interest (at least not your voters’) and not why you took the job in the first place, right? So let’s do something about it! Together! (Not a strong argument, but maybe a good way to argue.)
― Edward O. Wilson
AGI could simply become sentient and leave the planet because Earth’s borders are too tight for it, leaving mankind completely alone. But merely hoping for this scenario to unfold is foolish. (Argument adapted from Jürgen Schmidhuber.)
AI is like every complex problem: to deal with it right, you need to get both sides of the coin right. That means two things: how can we avoid the worst (failure / stupidity) on the one hand, and how can we get the best (a “win” / brilliance) on the other? These are two separate things, and we have to take both into account equally. As AI works in a lot of fields, from image recognition to language generation, these two sides matter accordingly.
“Currently we are prejudiced against machines, because all the machines we have met so far have been uninteresting. As they gain in sentience, that won’t be true.” Quote from Kevin Kelly. (Context: as soon as we really do have more sensors than there are grains of sand on Earth, plus connections between them at near the speed of light, we may get sentient technology. Just one possible explanation: if consciousness is an emergent phenomenon of enough neurons firing together, we will eventually build sentient technology. [Which may or may not be true, but it works as an example either way.])
“As Evolution rises, choicefulness increases. … More complexity expands the number of possible choices.” As complexity rises, free will for technology becomes inevitable at some point. But we should think very carefully about the base from which this free will starts off, and we are laying that foundation right now. (Quotes and argument based on Kevin Kelly’s argument that free will in technology is inevitable.)
AI is like a lottery ticket: we have to decide what to do with it before it expires. This probably won’t need to be on your agenda in 15 years, but right now it is as urgent as it is important. (Turns out the expiration date comes fairly quickly: https://www.powerball.com/faq/question/how-long-do-i-have-claim-my-prize)
It took us around 250,000 years to get from hominids to culture. It took us 70 years to get from computers usable at scale to image recognition beyond the human level. Ten years from now, the significant shift could already be done. We are on a moving train, and what our next stop will look like is decided by your actions now.
AI is like the biggest magnifying glass ever. You can make fire with it. But you can also burn something down. Decide wisely.
Hi Dusan,
first of all: thank you a lot for your feedback, I would love to bounce ideas around with you! Just email me at mail@benjamineidam.com please :)
“it seems like this is a project you already wanted to start, and has had an “EA” sticker added to it”
Actually, I wanted to start something EA-related well before I even thought about a project like this. But I get it. Right now I chose to start with something that I have tested a lot and that works great in the “real world”, with ~10%-20% EA in it. But I learned in my intro fellowship that there are a lot of intersections, plus my plan is to teach the content while constantly viewing it through the perspective of EA, in the same way a botanist looks at a forest, to use that example.
So in short: I think the content itself strikes a relatively optimal balance: it enables students to fulfill their “Digikai” while the “meta-perspective” of EA frames all of it perfectly.
But this is just my point of view as I teach it, and it works “wonders” for my students right now. Like literally changing their view of the world.
I don’t know if this will work at scale.
Again: Only one way to find out :)
“My emotional response is that having a curriculum 80% done without consulting with the broader EA field feels like going against the EA epistemic approach.”
I 100% agree. That is why I’m sharing it now, when I can more or less clearly see a path for how it could work. I think what remains now is a lot of fine-tuning, but I don’t know what that will look like. I also don’t want to just chain together hours of talking and chatting without actually doing anything, but I really do want to get more EAs involved and giving feedback on this before really going “online”.
I am really open to ideas and will throw ~80% of it overboard if necessary. I simply know technology, “love” the mental model of EA, and have the “talent” to be an engaging teacher. So I want to use that.
But if it changes along the way, I’m OK with that. It really depends on the arguments.
Cheers
Ben