Yes, given the posts earlier this week, I couldn’t resist… but let’s face it: AI, in the form of our current crop of large language models, has been touted as the technology that will change everything. Well, for humans, anyway.
And for humans, AI has proven problematic. Because, when it comes to Chosen Ones, you generally want them to do good. That’s one of the reasons they get the moniker of “Chosen One” versus “Eternal Adversary” or “the Stay-Puft Marshmallow Man.” It’s also why Obi-Wan is so upset in the film still above (though I’m sure being engaged in a lightsaber duel amid lava flows doesn’t help with emotional regulation).
Instead, AI has been the stated reason behind numerous job layoffs over the past few years… and, obnoxiously, I’ve heard a lot of anecdotal evidence of its impact in creative fields. This definitely feels like a betrayal of Obi-Wan proportions. Simply put: automation, AI, and robots should make daily chores and drudgery easier so we have more time to create, NOT replace our creating so we have more time for drudgery. The seven or nine regular readers of this blog will note I have not been bullish on AI for precisely these reasons. In fact, I have not been an early adopter…
But as mentioned before, I have since needed to analyze and weigh in on AI at my day job. Now, don’t expect to see me posting on Twitter or Threads or wherever about my six-step optimization for using AI to julienne fries anytime soon. However, I have been using it and seeing its value as a tool in various use cases.
Which leads to another problem of AI not fulfilling its role as the Chosen One. To borrow from Tony Martin-Vegue, AI is being pitched to us as Lieutenant Commander Data from Star Trek, but it’s really more like Captain Jack Sparrow from Pirates of the Caribbean. It’s not that it can’t help with planning; you just really need to monitor its work… and keep it away from the metaphorical rum that leads to hallucinations. So Martin-Vegue absolutely uses AI in his work, but he approaches it with a refreshing clarity about its constraints. (And if your work includes cyber risk quantification and decision-making, definitely check out his new book, something I’ll likely post about in the future.)
Unfortunately, my impression is that a whole lot of leaders are not approaching AI with the realization they’re dealing with an artificial pirate who is an unreliable narrator. And while their decisions may come back to bite them, I’m pretty sure some of those bites will be on our collective posteriors.
For example, the gamble on replacing software developers with AI is apparently already hitting some snags. Those of you who have dealt with tech program management have probably come across the concept of technical debt. Really exploring the concept is probably a post or three in itself, but you can think of it as short-term development wins that become a drag on future development and maintenance: just as putting something on your credit card can solve an immediate problem today, eventually you have to deal with that debt.
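If you want a concrete (and purely made-up) illustration of how that debt accrues, here’s a toy sketch; the function names, customers, and rates are all hypothetical, not pulled from any real system:

```python
# A toy example of technical debt: a pricing function that grew by quick fixes.
# Each hard-coded special case shipped fast, but each one also makes the next
# change riskier and harder to test.

def calculate_invoice_total(customer_id: str, subtotal: float) -> float:
    # Quick fix #1: one big client negotiated a discount, so it got hard-coded.
    if customer_id == "ACME-0042":
        subtotal *= 0.90
    # Quick fix #2: a regional tax change, patched inline instead of in config.
    if customer_id.startswith("EU-"):
        subtotal *= 1.21
    return round(subtotal, 2)


# "Paying down" the debt means moving those rules into data the whole team can
# see, test, and change without touching the function itself.
DISCOUNTS = {"ACME-0042": 0.90}
TAX_RATES = {"EU": 1.21}

def calculate_invoice_total_v2(customer_id: str, subtotal: float) -> float:
    region = customer_id.split("-")[0]
    subtotal *= DISCOUNTS.get(customer_id, 1.0)
    subtotal *= TAX_RATES.get(region, 1.0)
    return round(subtotal, 2)
```

The first version ships each fix fast; the second takes longer up front, which is exactly why the debt tends to linger.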
For a lot of us who have worked at or are working in larger organizations, this is an inescapable reality: there are almost always highly customized systems, often on legacy hardware and software, often built in-house, that have escaped holistic development for so long that they reach a breaking point. Maybe it’s an inability to integrate with other systems. Maybe it’s being locked into a workflow that made sense 10 years ago but now desperately needs to change. And it’s painful: whether you want to modernize the system, replace it, or even just guard against all the short-term enhancements that might be great now… but lead to more technical debt. So you don’t just need good developers who can see the big picture; you need experienced program managers who can see it too and deal with those pesky humans. Getting consistent support from leadership ain’t a bad thing either.
Guess what AI developers are lacking?
And this leads to another aspect of implementing AI that I recall a Google AI expert talking about years ago: you need to figure out your new processes. AI can’t do everything, and eventually you’ll need experienced people in the AI loop to replace the experienced people you have now, who will leave or retire. In other words: what are your entry-level people going to do now so they can be your mid-level and senior people later?
And, while organizations are figuring that out, it’s probably a good idea to start thinking about which parts of the job AI can take care of. This is a phenomenon I’m seeing more mention of: the “unbundling” of jobs into tasks. Since AI likely can’t replace a person’s whole job, unbundling makes sense, but just like technical debt above, it strikes me that working out how this will play out at one’s organization needs a critical mass of people looking at the big picture (including the inevitable discussions about what that big picture is), trial and error in adjusting how you work, and consistent leadership engagement that acknowledges this is new territory for everyone.
So, given the lack of consistent big-picture thinking I’ve seen applied to technical debt, I’m not feeling rosy about how well we’ll all adapt to implementing AI in a thoughtful, measured way.
BUT, if you or others need something to help you ponder how AI can be implemented, consider MIT’s Iceberg Index. Basically, they’re trying to figure out which tasks, in which industries, AI can actually take over from humans, rather than simply hoping it’ll be the Chosen One.
But just as you should manage your expectations about an AI agent’s output, you should manage your expectations for thoughtful AI implementation. As a historical rule, people love their silver bullets and Chosen Ones. And taking your time to think through things methodically, understand the big picture, and maybe predict likely outcomes? That should be someone else’s problem.
Maybe AI can do it!