
The rapid advancement of Artificial Intelligence (AI) is fundamentally reshaping how humans communicate with machines. In particular, Generative AI and Large Language Models (LLMs), which rose to prominence in 2022, have evolved beyond merely responding to user commands: they have established themselves as powerful tools that assist human intellectual labor by generating creative outputs such as text, images, and code. The latest technological trend, however, is moving beyond these generative models. This is evident in the emergence of Agentic AI, which is equipped with an autonomous cognitive loop: it sets its own goals, perceives its environment, formulates plans, executes actions, and reflects on the outcomes.
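The cognitive loop just described (goal-setting, perception, planning, execution, reflection) can be illustrated schematically. The sketch below is a minimal toy example; the class and function names, the counter-based "environment," and the termination check are all illustrative assumptions, not the design of any particular agent system.

```python
# Minimal illustrative sketch of an agentic cognitive loop:
# perceive -> plan -> act -> reflect, repeated until the goal is met.
# All names here are hypothetical placeholders, not a real agent framework.

class ToyEnvironment:
    """A toy world: a counter the agent can increment toward a target."""

    def __init__(self):
        self.state = 0

    def perceive(self):
        return self.state

    def act(self, plan):
        if plan == "increment":
            self.state += 1
        return self.state


def make_plan(goal, observation, history):
    """Formulate the next action from the goal and the current observation."""
    return "increment" if observation < goal else "stop"


def run_agent(goal, environment, max_steps=10):
    """Autonomous loop: the agent perceives, plans, acts, and reflects."""
    history = []
    for _ in range(max_steps):
        observation = environment.perceive()          # perceive the environment
        plan = make_plan(goal, observation, history)  # formulate a plan
        if plan == "stop":                            # goal reached: halt
            break
        outcome = environment.act(plan)               # execute the action
        history.append((plan, outcome))               # reflect: record outcomes
    return history


steps = run_agent(goal=3, environment=ToyEnvironment())
# The agent reaches the goal state after three increments.
```

Even in this toy form, the structural point is visible: the human supplies only the goal, while the selection and execution of individual actions happen inside the loop without further instruction.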
Agentic AI is defined as an autonomous, goal-oriented AI system: one able to respond to its environment, perform tasks self-directedly, and dynamically adjust its strategies to achieve its goals (Hosseini & Seilani, 2025; Sapkota et al., 2026). The advent of Agentic AI establishes a new relationship that cannot be explained by the dichotomous schema of “human as subject, machine as tool” that has underpinned existing communication studies and Human-Computer Interaction (HCI) research. Traditional communication models defined media merely as neutral channels or intermediaries through which messages are transmitted. From this perspective, humans are the sole subjects who produce and interpret meaning, while technology is nothing more than a means of delivering or storing it efficiently. Classical HCI theories have likewise focused on usability and control, with users governing the system according to clear goals. Norman’s (2013) model of the gulfs of execution and evaluation, for example, holds that the core of interface design lies in helping users accurately understand the system’s state and manipulate it as intended.
However, Agentic AI exhibits a mode of interaction that is difficult to explain from such an instrumental perspective. It infers the user’s intentions in order to define tasks on its own, and it solves problems by invoking external tools through Application Programming Interfaces (APIs) or by collaborating with other agents, without human intervention. The relationship between humans and AI is thus evolving from one of instruction and execution, in which users enter explicit commands and check the results, into one of collaboration and partnership, in which both parties interpret and coordinate each other’s intentions to co-create outcomes. This suggests that AI should be redefined not as a simple tool but as a communication partner, an actor with social presence.
The era of Agentic AI demands a new communication paradigm. Whereas traditional HCI models emphasized system transparency and controllability, in an Agentic AI environment that learns and judges autonomously, the opacity of the system’s internals (the black box) and the uncertainty of its results inevitably increase. This shift profoundly affects how users accept AI’s judgments. Users confront a situation in which they either trust AI-generated outputs blindly or, conversely, doubt them excessively, raising issues of trust and relationship that cannot be resolved through technical improvements alone.
In this context, communication between users and AI becomes all the more crucial. Users need to understand the operating principles and decision-making processes of AI, and AI must build trust through interaction with users. This article therefore argues that, by elevating Agentic AI from an object of control to a subject of communication and cooperation, communication in the age of artificial intelligence must expand beyond technical optimization toward forming new social relationships in which humans and machines coexist. From this perspective, the article aims to serve as a catalyst for developing new theories and refining existing models, so that we can explore better modes of interaction and design a future in which technology and humans achieve harmony.
