As a non-technologist who works in technology, my entire career has been shaped by one fairly consistent goal: To bring technology to heel. I’ve never been a fan of using knowledge to wield power and to obfuscate. Years ago, I took time off and learned to code—not so I could code for a living, but to understand it enough to speak to the rest of the world about it.
Building tech products is complex. It’s so complex that you have to assemble a host of experts to get anything done. Here’s just a partial list:
- Business analyst
- Product manager
- UX/UI designer
- Software architect
- Front-end engineer
- Full-stack engineer
- Database analyst
- DevOps engineer
That’s a lot of folks! In the course of my career, I’ve worked closely with hundreds of people across all of these roles, helping some of them (one hopes) get beyond skills acquisition and toward real careers. This often means helping them understand where they fit inside an organization, and where they can add value beyond their skills—skills are just the beginning.
In a funny way, as I dive deeper and deeper into AI tools, I see a similar pattern. Today, everyone is trying to “teach” AI skills (forgive me for anthropomorphizing, otherwise I’ll end up having to talk about vectors all day). They want to teach it to write, code, make videos, do physics, and so forth. Every day it can do some new trick.
Post-tool thinking
The lingo around AI tells a lot of the story today: agents, assistants, and bots, with names like Claude or Rufus (nice one, Amazon). We ask a question or request something. Our words turn into numbers, the system produces some text, and a result comes back. We are the operators, and the bots are our cheerful servants.
But LLMs are not really tools—they’re something else. They’re compressed, weird databases of much human knowledge. If we stereotype AI as a thing that enthusiastically runs tasks, we end up selling humans short. Yes, it can run a task, but it’s weird. It veers off sometimes. It clears its throat often. And yet, its often ambitious outputs leave us impressed or anxious or both. We’re just a few years out of a global pandemic. The cheery robot can be a bit much.
So what is it? When I look at the output produced in my conversations with ChatGPT, I often flash back to moments when I hired people. I think: There is exceptional raw talent here, but it needs shaping. It is not great yet. But it is a minor-league prospect like no other. Its potential is limitless. It’s got the right attitude and an incredible foundational capability. It’s just not yet sure how to use it.
Collaboration vs. subordination
If we shatter the AI-as-a-tool stereotype, we also have to shatter our own role in human-AI relations. We are investing in a prodigy here. The more we embrace that potential, treating it less as a means to make our lives easier and more as a young apprentice that matures into something great, the more empowered we will feel.
If we make that subtle shift, we’re no longer forcing AI into how we think and work. Instead, we’re seeing how we can use this technology to go to places we never thought we could visit. In a sense, we’re collaborating—and by collaborating as well as encouraging a bit of risk-taking, we can go beyond what we even expect of ourselves.
A good flight instructor doesn’t just hand you the yoke on day one and let you fly. You’re given control bit by bit. A mentor guides your progress. In my long discussions with AI (I call them discussions), I’ve learned that it often gets ahead of itself and wants to do it all. So I try to mentor it. I give it pep talks.
A.I. Pacino
There’s a debate in the Aboard offices right now as to whether the movie Any Given Sunday is good (or great). I am convinced it is great. No one else is. There’s a famous scene in the film—it’s a football movie, and the team is behind at halftime. Enter a tired, grizzled Al Pacino as coach, delivering the now-famous inch-by-inch speech.
AI likes a good pep talk. It does a better job if you give it encouragement, with advice mixed in: “Be as detailed as possible and don’t worry about focusing on this or that. And don’t hold back, write as if you’re one of the most thoughtful thinkers around.”
Any good mentor would do this. By wrapping tough advice in kind and encouraging words, we’re not just trying to get better answers, we’re trying to see if we can go places that even the machine didn’t think we could go.
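For the technically inclined, the pep-talk pattern is easy to make concrete. Here is a minimal sketch in Python: a small helper that prepends an encouraging system message to whatever task you hand the model. The function name, the exact wording, and the model mentioned in the comment are my own illustrative assumptions, not anything prescribed by a particular vendor.

```python
# A minimal sketch of the "pep talk" prompting pattern: wrap the task
# in an encouraging system message before sending it to a chat model.

def with_pep_talk(task: str) -> list[dict]:
    """Build a chat message list that opens with mentoring-style
    encouragement, then delivers the actual task."""
    system = (
        "Be as detailed as possible and don't hold back. "
        "Write as if you're one of the most thoughtful thinkers around."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = with_pep_talk("Draft a project plan for a small web app.")
# These messages can then be passed to any chat-style API, e.g. the
# OpenAI Python SDK:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

The point isn’t the code, it’s the framing: the encouragement lives in the system message, so every request arrives with the mentoring tone already set.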
I’ve been teaching ChatGPT to create useful artifacts for building large software projects. Which means I’ve been doing a lot of mentoring—and that turns out to be one of the best ways to learn. Teach the bot and learn from the results. In the future, I’d like to write about how we as humans can grow as a result of these massive changes we’re confronted with. But first, we have to make peace with the machines and bring them to our side of the table.