Thanks for writing this up!
Quick thing I flagged:
> Probably a spectrum is far too simple a way of thinking of this. Probably it’s more complicated, but I think economic forces probably push more toward the middle of the spectrum, not the very extreme end, for the reason that: suppose you’re employing someone to plan your wedding, you would probably like them to stick to the wedding planning and, you know, choose flowers and music and stuff like that, and not try to fix any problems in your family at the moment so that the seating arrangement can be better. [You want them to] understand what the role is and only do that. Maybe it would be better if they were known to be very good at this, so things could be different in the future, but it seems to be [that] economic forces push away from zero agency but also away from very high agency.
I think this intuition is misleading. In most cases we can imagine, a wedding planner who attempts to do other things would be bad at them, and thus undesirable. There's a trope of people doing more than they were tasked with, and it often goes badly; one reason is that if the requester scoped the task narrowly, they probably assumed the person would mess up anything outside that scope.
If the agent is good enough to actually do a good job at other things, the situation looks different from the start. If I knew that this person who can do “wedding planning” is also excellent at many other things, including helping with my finances and larger family issues, then I'd probably give them a broader request, like “just make my life better”.
When I trust someone to do a good job across many broad areas of my life or business, I assign tasks accordingly.
Already, with GPT-4, I'm definitely asking it to do many kinds of tasks.
I think economic forces are pushing toward many bounded workflows, but only because that's currently more effective and economical (it's easier to build a great experience around “AI for writing”), not because people would otherwise want it that way.