The Rise of Agents – A Key Trend
Ray Poynter, 9 August 2025
Every so often, a new term captures the imagination of the tech world. “Agent” is one such word. From boardrooms to blogs, organisations are talking about how agents will transform work. Industry surveys suggest this is more than hype. KPMG’s Q2 2025 AI Quarterly Pulse Survey reports that agents are moving beyond experiments: a third of organisations already use AI agents in production[1] and nearly nine in ten leaders expect agents to necessitate fundamental organisational change[1]. Gartner goes further, predicting that by 2028, one-third of enterprise software applications will include agentic AI (up from less than 1% in 2024) and at least 15% of business decisions will be made autonomously via agents[2]. Deloitte anticipates that 25% of companies using generative AI will launch agentic pilots in 2025, rising to 50% by 2027[3]. When market leaders and consultancies align on the direction of travel, it is worth paying attention.
Yet despite the excitement, or perhaps because of it, there is confusion about what counts as an agent. This post examines the rise of agents, the debate surrounding their definition, and how a pragmatic approach can help research and insights teams make progress today.
Agents are Everywhere – and Growing Fast
AI agents are not a new concept, but recent advances have pushed them into the mainstream. KPMG’s latest pulse survey shows the share of organisations using agents jumping from 11% to 33% between quarters[1]. Leaders aren’t dabbling: 82% expect their industry’s competitive landscape to be reshaped within two years[1], and 87% recognise they will need to redefine performance metrics and upskill employees[1]. Trust and governance are critical; respondents ranked data privacy (69%), regulatory concerns (55%), and data quality (56%) among their top issues[1].
Outside KPMG, analysts paint a similar picture. The World Economic Forum, citing Gartner, notes that by 2028, one-third of enterprise software will include AI agents, and 90% of businesses see agentic AI as a source of competitive advantage[2]. (Side note from Ray: the idea that any AI usage based on mainstream AI will be a competitive advantage is plain wrong. Not having it will be a problem, but these tools will be widely available and will therefore be table stakes, not a competitive advantage.) The same article estimates that the AI agent market will grow from US$5.1 billion in 2024 to US$47.1 billion by 2030[2]. Workday’s blog, again quoting Gartner, emphasises that this shift will also see at least 15% of business decisions made autonomously by agents[2]. And Deloitte’s forecast that half of generative‑AI‑adopting firms will pilot agentic AI by 2027[3] suggests momentum is accelerating.
What’s in a Name? – The Agent Debate
As adoption increases, a fundamental question arises: What exactly is an agent? Here lies a lively debate. Some commentators reserve the term “agentic AI” for autonomous systems that plan and execute multistep tasks across multiple tools and APIs, making decisions and taking action with little or no human intervention[4]. Thomson Reuters contrasts agentic AI, which manages complex workflows autonomously, with generative AI, which produces content in response to specific prompts[4]. This narrow definition emphasises high autonomy and goal‑directed behaviour; the agent is not just an assistant but an entity that can decide how to achieve an objective.
Other descriptions are broader. Workday lists key capabilities of agentic systems: autonomy, goal orientation, adaptability, reasoning, learning and collaboration[5]. The World Economic Forum describes virtual agents that manage software tasks and embodied agents in physical systems such as robots[6]. MIT’s CSAIL outlines categories ranging from simple reflex agents to learning and hierarchical agents[6]. These frameworks recognise a spectrum: some agents follow preset rules, some build internal models, others pursue defined goals or optimise utility[5].
Within market research, we have seen conversational agents that conduct and summarise interviews, data checking agents that validate survey responses and dashboard builders that compile charts from uploaded data. Do these count as agents? Some purists say no because they lack autonomous goal‑setting or the ability to call multiple tools without direction. Others argue that if the tool performs a meaningful task on your behalf using AI, it is an agent in practical terms.
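To make the contrast concrete, here is a deliberately simple, hypothetical sketch (in Python) of what the narrow, purist definition implies: a loop in which the software decides for itself which tool to call next until its goal is met. The tool functions and the plan_next_step helper are invented for illustration only; in a real system, that planning step would typically be delegated to a large language model rather than hand-written rules.

```python
# A minimal, hypothetical sketch of a goal-directed agent loop.
# The "tools" and the planner are stand-ins; in a real system the
# planning decision would be made by a large language model.

def fetch_survey_data(state):
    state["data"] = ["resp 1", "resp 2", "resp 3"]  # placeholder data
    return state

def theme_responses(state):
    state["themes"] = ["price", "service"]  # placeholder analysis
    return state

def draft_report(state):
    state["report"] = "Key themes: " + ", ".join(state["themes"])
    return state

TOOLS = {"fetch": fetch_survey_data, "theme": theme_responses, "report": draft_report}

def plan_next_step(state):
    """Decide which tool to call next, or stop when the goal is reached.
    The purist definition requires this choice to be made autonomously."""
    if "data" not in state:
        return "fetch"
    if "themes" not in state:
        return "theme"
    if "report" not in state:
        return "report"
    return None  # goal reached

def run_agent(goal):
    state = {"goal": goal}
    while (step := plan_next_step(state)) is not None:
        state = TOOLS[step](state)
    return state["report"]

print(run_agent("Summarise what customers said about us"))
```

The point of the sketch is the loop itself: the software, not the user, decides which step comes next. Most of the research tools described above do not do this, which is exactly why the purists hesitate to call them agents.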
Why a Practical Definition Matters
I fall into the pragmatic camp. Restricting the term “agent” to systems that set their own goals and operate without human guidance is logically neat, but it makes the use cases far too narrow for August 2025. Most organisations are not ready to hand over end‑to‑end decision‑making to software. Instead, they are experimenting with, and implementing, tools that perform tasks, trigger workflows, and surface insights. These tools often combine generative AI, retrieval‑augmented generation and basic automation. Calling them agents helps us think about them as building blocks of a broader agentic future.
From a market research and insights perspective, this broader definition is valuable because it aligns with how we work today. Consider a tool that analyses open‑ended responses and produces themes; another that checks questionnaire logic; or a script that fetches market data, summarises it and drafts a client‑ready report. Each automates a task and amplifies human capability. They may not plot their course, but they do work for us, and they are the start of our agent journey.
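By this broader definition, even a single-task tool counts. Here is a minimal sketch of the first example, a tool that themes open-ended responses. The call_llm function is a placeholder I have invented for illustration; in practice it would be a call to whichever model or platform your organisation actually uses.

```python
# A minimal sketch of a single-task "agent" that themes open-ended
# survey responses. call_llm is a placeholder for a real model call.

def call_llm(prompt: str) -> str:
    # Placeholder: return a canned answer so the sketch runs end to end.
    return "Themes: delivery speed; friendly staff; unclear pricing"

def theme_open_ends(responses: list[str]) -> str:
    prompt = (
        "Identify the main themes in these survey responses:\n"
        + "\n".join("- " + r for r in responses)
    )
    return call_llm(prompt)

answers = [
    "The parcel arrived two days late.",
    "Staff were lovely on the phone.",
    "I couldn't work out what I was actually paying for.",
]
print(theme_open_ends(answers))
```

Nothing in this sketch plans its own goals, yet it clearly does useful work on our behalf, which is the pragmatic sense in which I am happy to call it an agent.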
Adopting this inclusive terminology also encourages experimentation. It signals to colleagues and clients that agents are not science fiction; they are practical assistants we can use now. It also helps vendors and platforms market solutions under a standard banner, driving investment and innovation. As the technology matures, definitions will tighten, and more autonomous agents will emerge. For now, the key is to learn by doing.
Implications for the Insights Industry
The rise of agents will reshape our field. Here are a few implications:
- Efficiency and Productivity – Automation tools free researchers from repetitive tasks, allowing more time for thinking and consulting. Agents can clean data, run analyses, draft reports and generate charts. They also do it more consistently. Ten team members using an agent will be considerably more consistent than ten team members each doing it their own way; they will also be faster. The best might not be as good, but the worst will be less bad.
- Upskilling and Governance – As KPMG notes, leaders recognise the need to redefine performance metrics and upskill employees[1]. Researchers must learn to supervise agents, validate outputs and maintain ethical standards. Governance frameworks around data privacy, quality and regulatory compliance are essential[1].
- Human‑Agent Collaboration – Agents excel at routine or computational tasks; humans provide context, empathy and critical thinking. The most effective workflows will combine both, echoing the human‑in‑the‑loop principles highlighted by Thomson Reuters[4].
- Experimentation Culture – Deloitte’s prediction that half of generative‑AI adopters will pilot agents by 2027[3] suggests that experimentation is now expected. Insight teams should prototype agents, share lessons and develop best practices.
Conclusions and Next Steps
Agents are not a fad. Analysts from KPMG, Gartner, Deloitte and others agree that adoption is growing rapidly and will accelerate over the next few years[1][2]. At the same time, debates about what constitutes an agent illustrate the evolving nature of the field[4]. A pragmatic definition, viewing any AI‑powered tool that performs a task for you as an agent, is helpful today because it encourages adoption and experimentation.
As with any emerging technology, we must proceed thoughtfully. Trust, governance and ethical oversight remain critical[1]. For market researchers and insight professionals, the opportunity lies in learning to work alongside agents—testing their capabilities, understanding their limitations and integrating them into our workflows. The era of agents has begun; the question is not whether they will affect our industry, but how quickly and how well we will adapt.
As an essential first step, you and everybody in your organisation should look to create agents for every task you each tend to do more than, say, three times a month. It won’t always be possible, but try to get into the habit of doing so. As you get better at building simple agents, you will naturally start creating more complex ones and stringing them together.
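To illustrate what that “stringing together” can look like, here is a hedged sketch in which two simple agents, one that themes responses and one that drafts a short summary, are chained into a pipeline. Both agent functions and the call_llm stub are hypothetical placeholders, not any particular product or API.

```python
# A hypothetical sketch of chaining two simple agents into a pipeline.
# call_llm stands in for a real model call.

def call_llm(prompt: str) -> str:
    return "[model output for: " + prompt[:40] + "...]"  # placeholder

def theming_agent(responses: list[str]) -> str:
    return call_llm("Theme these responses: " + " | ".join(responses))

def summary_agent(themes: str) -> str:
    return call_llm("Write a two-sentence client summary of: " + themes)

def pipeline(responses: list[str]) -> str:
    return summary_agent(theming_agent(responses))

print(pipeline(["Late delivery", "Great support", "Confusing invoice"]))
```

The pattern is the useful part: each agent does one job well, and the pipeline simply passes the output of one to the next. Start with the individual agents; the chaining tends to follow naturally.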
Want to know more about Agents?
I recently presented ‘Getting to Grips with Agents’ at a #NewMR webinar, and you can access the slides and recording by clicking here.
References
[1] KPMG AI Quarterly Pulse Survey Q2 2025
[2] Gartner (via Workday and World Economic Forum)
[3] Deloitte (reported in Computerworld)
[4] Thomson Reuters
[5] Workday