
A few months ago, I walked into the office of one of our clients, a publicly traded vertical software company with tens of thousands of small business customers. I expected to find a traditional support team with rows of agents talking on the phone, sitting in front of computers analyzing tickets. Instead, it looked more like a control room.
There were specialists monitoring dashboards, tuning AI behavior, debugging API failures, and iterating knowledge workflows. A team member who had started his career handling customer questions via chat and email (resetting passwords, explaining features, troubleshooting one-off issues, and escalating bugs) was now writing Python scripts to automate routing. Another was creating quality scoring models for the company’s AI agent.
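A routing automation like the one that team member was building might, as a purely hypothetical sketch, look like this. The queue names, keywords, and `Ticket` shape are all illustrative assumptions, not the company's actual system:

```python
# Hypothetical keyword-based ticket router. Queue names, keywords,
# and the Ticket shape are invented for illustration.
from dataclasses import dataclass

ROUTES = {
    "billing": ["invoice", "refund", "charge", "payment"],
    "integrations": ["api", "webhook", "sync", "token"],
    "accounts": ["password", "login", "2fa", "locked out"],
}

@dataclass
class Ticket:
    id: int
    subject: str
    body: str

def route(ticket: Ticket, default: str = "general") -> str:
    """Return the queue a ticket should land in, based on keywords."""
    text = f"{ticket.subject} {ticket.body}".lower()
    for queue, keywords in ROUTES.items():
        if any(kw in text for kw in keywords):
            return queue
    return default

print(route(Ticket(1, "Refund request", "I was double charged")))  # billing
```

In practice a script like this would feed a helpdesk API rather than print to the console, but the core shift is the same: the agent encodes their triage judgment once instead of applying it ticket by ticket.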
This seemed markedly different from the hyperbole I had been hearing about customer service roles disappearing largely due to AI. What I was seeing in our customer base seemed more like a change in the way support work is defined.
So I decided to take a closer look. I analyzed 21 customer service job openings at AI-native companies, high-growth startups, and enterprise SaaS. These jobs range from technical support for complex software products to more transactional business support involving billing and other common issues.
What I discovered is that customer service is being rebuilt around AI-native workflows and systems-level thinking. Responding to individual tickets still matters, but these roles exist to design and operate the technical systems that solve customer problems at scale.
The result is a new type of support function, one that is part operator, part technologist, and part strategist.
AI skills are now table stakes
For most of the last two decades, support hiring was optimized for communication skills and product familiarity. That baseline is no longer enough.
Of the 21 job postings I analyzed, nearly three-quarters explicitly required experience with AI tools, automation platforms, or conversational AI systems.
These roles involve configuring, monitoring, and improving AI systems over time: reviewing conversation logs, auditing AI behavior, and identifying failure modes.
In other words, AI literacy has become the foundation of modern support work. If you don’t understand how AI systems behave, you won’t be able to support the customers who rely on them.
More than half of the roles I reviewed required candidates to debug APIs, analyze logs, write SQL queries, or script automations in Python or Bash. Many expected candidates to be familiar with cloud infrastructure, observability tools, or version control systems like Git.
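The log-analysis side of that skill set can be sketched in a few lines of Python. The log records and error categories below are invented for illustration; real logs would come from a database or observability platform:

```python
# Hypothetical sketch: tally AI-agent conversation logs by failure
# mode. The records and error names are invented for illustration.
from collections import Counter

logs = [
    {"conversation_id": 1, "error": "api_timeout"},
    {"conversation_id": 2, "error": None},          # resolved cleanly
    {"conversation_id": 3, "error": "bad_config"},
    {"conversation_id": 4, "error": "api_timeout"},
]

# Count only the conversations that ended in an error.
failures = Counter(rec["error"] for rec in logs if rec["error"])
for mode, count in failures.most_common():
    print(f"{mode}: {count}")
```

A report like this is what turns "reviewing conversation logs" from reading transcripts one at a time into spotting the failure mode behind the next thousand tickets.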
This would have been unthinkable in support job descriptions even five years ago.
But it makes sense. When AI systems fail, they fail at scale. Diagnosing those failures requires technical fluency, such as understanding how models interact with external systems and when a problem is rooted in configuration versus product logic.
The work has evolved from solving problems ticket by ticket to preventing the next thousand tickets.
Humans are needed to solve tougher problems
Once AI becomes part of the support workflow, the nature of the work becomes more technical. One support leader I spoke to, at a company that now resolves more than 80% of its tickets with AI, put it plainly: once automation handles the easy questions, the work that remains is harder. The same frontline agents who used to focus on quick resolutions now deal with the most frustrated customers and the trickiest edge cases, and they've had to expand their skills accordingly.
In practice, this often looks like a client trying to complete a critical workflow, such as synchronizing data between systems before running billing. An AI agent starts from documentation that a subject matter expert has synthesized from multiple functions across the enterprise, and from there it can confirm that everything is configured correctly. But the AI agent may have no visibility into an underlying system that silently failed hours earlier. The client follows the guide, only to discover that the data did not move as expected. When the problem escalates, the subject matter expert has to reconstruct what happened across the systems, reason out what the AI agent missed, and help the customer recover without losing trust.
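The kind of cross-system check a support engineer might run to surface that silent failure can be sketched simply: compare what the source system says it exported against what the destination actually received. Both data sets here are invented for illustration:

```python
# Hypothetical sketch of a cross-system reconciliation check.
# The batch data is invented; real counts would come from each
# system's API or database.

source_batches = {"2024-06-01": 120, "2024-06-02": 134}  # records exported
dest_batches   = {"2024-06-01": 120, "2024-06-02": 0}    # records imported

def find_silent_failures(source: dict, dest: dict) -> list[str]:
    """Return batch keys where the destination is missing records."""
    return [day for day, sent in source.items()
            if dest.get(day, 0) < sent]

print(find_silent_failures(source_batches, dest_batches))  # ['2024-06-02']
```

The point is not the code itself but the posture: instead of trusting either system's own status page, the engineer checks the two against each other, which is exactly the reasoning the AI agent lacked.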
This is the kind of end-to-end work that AI can't yet do on its own. It requires technical fluency to trace faults across disparate systems, and human judgment to decide what can be fixed immediately versus what needs deeper product or engineering intervention. In this way, support shifts from answering the questions in the manual to writing the manual and resolving the issues it doesn't cover.
The hybrid human-AI model is the default
Despite widespread fear that AI will replace support jobs, not a single posting I analyzed suggested that support would ever be fully automated.
Instead, almost all roles gravitated toward a hybrid model in which AI handles routine interactions, while humans monitor quality and continually improve the system.
This makes sense: when Gartner surveyed customer service leaders last year, 95% said they would hire human agents in their operations to help define the role of AI.
Titles like “AI Support Specialist,” “AI Quality Analyst,” and “Support Operations Specialist” focused almost entirely on orchestration, designing escalation logic, and defining when humans intervene.
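The escalation logic these roles design can be sketched as a simple rule function. The thresholds and signal names below are illustrative assumptions, not any product's real configuration:

```python
# Hypothetical escalation rules for an AI support agent. Thresholds
# and signal names are invented for illustration.
def should_escalate(confidence: float, turns: int, sentiment: float) -> bool:
    """Decide whether the AI agent should hand off to a human."""
    if confidence < 0.6:   # model unsure of its own answer
        return True
    if turns > 8:          # conversation going in circles
        return True
    if sentiment < -0.5:   # customer clearly frustrated
        return True
    return False
```

Real systems layer in more signals (account tier, issue category, repeat contacts), but the job described in these postings is precisely choosing these rules and tuning the thresholds as the AI's behavior drifts.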
This is where the "control room" image from earlier becomes concrete. The human work shifts from simply answering questions to shaping systems.
Together, these trends point to a single conclusion: customer service is becoming more specialized. Repetitive work is disappearing, but technical and judgment-laden work is expanding. That change is already visible in the way companies hire. The question now is whether organizations (and workers) are prepared to adapt quickly enough.

