Abstract

This work analyzes the architectural, economic, and operational parameters of deploying autonomous AI agents in corporate business processes, with an emphasis on marketing, sales, and customer service. Drawing on empirical data, forecasts from leading consulting and analytical organizations (Gartner, McKinsey, Deloitte), an analysis of technical documentation for modern platforms, and the author's practical research in multi-agent orchestration, advanced RAG implementation, and the architectural development of the hey-howdy.com platform, it is demonstrated that agentic systems fundamentally expand functional boundaries and overcome the typical limitations of traditional chatbots. Implementing an agentic approach in combination with Retrieval-Augmented Generation (RAG) is shown to yield a 40–50% reduction in average handling time (AHT) and a decrease in total operating costs of up to 90% compared with human personnel, while simultaneously increasing sales conversion through hyper-personalized customer interactions. The work formulates a new taxonomy of agent autonomy levels, details key risk and vulnerability vectors with reference to the OWASP Top 10 for LLM Applications 2025, and describes strategic principles for constructing hybrid human–machine teams that ensure controlled and secure scaling of agent solutions within an organizational environment.

Keywords

autonomous AI agents; Agentic AI and agency; Agentic RAG vs. Naive RAG; business process optimization and ROI; customer service; multi-agent orchestration and the digital workforce; security of agent systems.

Introduction

By the beginning of 2025, the corporate sector had reached a peculiar bifurcation point in the adoption of artificial intelligence technologies. The euphoric period of 2023, when generative models were perceived primarily as a breakthrough technological phenomenon, gave way to a phase of stringent demand for measurable return on investment (ROI) and demonstrable improvements in operational efficiency. According to McKinsey [1], by the end of 2024, 78% of organizations had integrated AI into at least one business function; however, the transition from experimental pilots to scalable industrial solutions revealed fundamental limitations of the previously dominant approaches. Traditional chatbots based on rigidly predefined scripts and hierarchical decision trees have reached a productivity plateau: they cannot reliably handle unstructured requests, account for rapidly changing context, or adapt without continuous manual reconfiguration and the involvement of a human operator [3-5].

The need for theoretical and applied conceptualization of the phenomenon of agency in artificial intelligence systems becomes the central challenge of this stage of technological development. Unlike passive generative models that are limited to producing a textual response to an input query, autonomous agents are characterized by a set of properties that includes perception of the external environment, formation and decomposition of plans, use of external tools (APIs, databases, corporate information systems), and execution of chains of actions aimed at achieving target business outcomes. Gartner forecasts that by 2029 agentic AI will autonomously resolve up to 80% of standard customer service tasks, leading to an approximately 30% reduction in operating expenses in this domain [2]. Under these conditions, AI agents cease to be merely a tool for partial automation and are transformed into a new form of digital workforce, requiring a radical revision of management models, architectural principles for building information systems, and enterprise-level security paradigms.

The objective of the study is to develop and substantiate an architectural-economic and risk-oriented model for deploying autonomous AI agents in corporate business processes (marketing, sales, customer service), enabling measurable growth in efficiency and ROI under controlled and secure scaling of agent solutions.

The scientific novelty of the study is manifested in several interrelated dimensions. First, a comprehensive architectural analysis is performed, within which a detailed comparison of the Naive RAG and Agentic RAG architectural patterns is carried out in terms of their impact on key business metrics, primarily average handling time (AHT) and conversion rate (CR), integrating practical observations obtained in the development of Howdy-class platforms. Second, an economic model for assessing ROI for agentic systems is developed, accounting for the nonlinear nature of agent learning, the effects of accumulating organizational and behavioral context, and scalability by the number of tasks and streams, which fundamentally distinguishes such a model from linear approaches applied to human personnel. Third, security and ethics aspects are integrated on the basis of the updated OWASP Top 10 for LLM Applications 2025 standard, with an emphasis on specific threats inherent to agentic systems, including the phenomenon of excessive agency, when excessive freedom of decision-making generates undesirable or unpredictable actions on the part of agents.

The author’s research hypothesis asserts that the deployment of autonomous multi-agent systems based on the Agentic RAG architecture and the principles of multimodal interaction makes it possible to overcome the limitations of linear automation and achieve a reduction in the cost per individual transaction by more than 80% while simultaneously increasing customer satisfaction (CSAT) and sales conversion. It is assumed that the effectiveness of such systems increases nonlinearly as context, institutionalized experience, and adaptive behavioral strategies of agents accumulate, thereby forming a sustainable competitive advantage for the organization in an environment characterized by high dynamism of external and internal conditions.

Materials and Methods

The study relies on a mixed method that combines three complementary approaches. First, a quantitative analysis is used, including the processing of statistical data from McKinsey, Gartner, and Forrester reports and technical benchmarks (arXiv) for 2024–2025; particular attention is paid to the dynamics of key operational metrics such as AHT (Average Handling Time), FCR (First Contact Resolution), CSAT (Customer Satisfaction Score), and Cost per Contact. Second, a qualitative analysis is applied, focused on examining the architectural patterns of the hey-howdy.com ecosystem and comparable solutions in order to identify best practices for the design and orchestration of autonomous agents in applied business scenarios. Third, synthetic modeling is used, within which mathematical models are constructed to forecast the economic efficiency of deploying agentic systems depending on transaction volume and load parameters, which makes it possible to assess the potential scaling effect.
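The structure of the synthetic modeling component can be illustrated with a minimal sketch. All parameter values below (per-contact costs, platform fee) are hypothetical placeholders rather than measurements from any platform; the point is the structural contrast between personnel costs that scale linearly with volume and an agent cost curve composed of a fixed platform fee plus a small marginal cost.

```python
# Illustrative cost model for human vs. agentic service delivery.
# All parameters are hypothetical and chosen for demonstration only.

def human_cost(contacts: int, cost_per_contact: float = 8.0) -> float:
    """Human staffing cost scales roughly linearly with contact volume."""
    return contacts * cost_per_contact

def agent_cost(contacts: int,
               platform_fixed: float = 5_000.0,
               cost_per_contact: float = 0.35) -> float:
    """Agentic cost: fixed platform fee plus a small marginal cost per contact."""
    return platform_fixed + contacts * cost_per_contact

def monthly_savings(contacts: int) -> float:
    """Difference between the two cost structures at a given volume."""
    return human_cost(contacts) - agent_cost(contacts)

if __name__ == "__main__":
    for volume in (1_000, 10_000, 50_000):
        print(volume, round(monthly_savings(volume), 2))
```

At low volumes the fixed platform fee dominates and the advantage is modest; as volume grows, the gap widens roughly linearly, which is the scaling effect the modeling component is designed to capture.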

The object of empirical consideration is the hey-howdy.com platform, a comprehensive ecosystem for creating and deploying autonomous AI agents. Functionally, the system is designed to maximize customer satisfaction (CSAT) by delivering continuous 24/7 support with response latencies optimized to meet industry-leading benchmarks. The solution architecture is built on a modern, composable stack utilizing Next.js for the application interface, Supabase as the scalable backend infrastructure and relational database, and Qdrant for vector-based semantic retrieval. This configuration enables the transition from static automation to dynamic agency, allowing agents to handle complex user inquiries without human intervention.

Results and Discussion

One of the most representative effects of deploying autonomous agents is the change in the structure and values of the average handling time (AHT) metric. In traditional call centers, AHT is interpreted as the total time, including the direct conversation with the customer, periods on hold, and the post-processing stage associated with data entry and formalization of interaction outcomes. For a human operator, this metric has natural physiological and cognitive limits determined by the speed of information perception, the need to shift attention, and the limited ability to perform multiple tasks simultaneously [31, 33].

Analysis of the hey-howdy.com deployment metrics confirms that the platform’s focus on minimizing response latency leads to a structural shift in operational efficiency. The use of AI agents makes it possible to reduce AHT by 30–50% already at the pilot implementation stage [17]. This effect is driven, first and foremost, by agents’ virtually instantaneous access to knowledge bases, which eliminates the time cost of searching for and interpreting information, as well as by the ability to execute auxiliary operations in parallel, including populating the CRM system and updating records during the dialogue. As a result, not only is the duration of service reduced, but also the share of post-processing, which has traditionally been a significant component of total handling time (see Table 1).

Table 1. Comparative structure of contact processing cost in 2025 (compiled by the author on the basis of [19]).
| Comparison parameter | Human operator (USA/EU) | Offshore BPO (India/Philippines) | Hybrid (chatbot + human) | Autonomous AI agent (Agentic AI) |
|---|---|---|---|---|
| Cost per contact | $6.00–$12.00 | $2.00–$4.00 | $1.50–$3.00 | $0.25–$0.50 |
| Availability | Limited by shifts (8 h) | 24/7 (with shift rotation) | 24/7 (basic questions) | 24/7 (full cycle) |
| Scalability | Low (hiring: 1–3 months) | Medium (hiring: 2–4 weeks) | High | Instant (serverless) |
| AHT (average) | 6–10 min | 8–12 min | 2–3 min (bot), 8+ min (human) | 2–3 min (end-to-end) |
| Training | 4–6 weeks | 4–6 weeks | Constant programming | Continuous (RAG) |
| Error rate | Variable (fatigue) | Variable (language barrier) | High (misunderstanding of context) | Low (with RAG validation) |
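The parallel execution of auxiliary operations described above (updating CRM records while the dialogue is still in progress, rather than in a sequential post-processing phase) can be sketched with standard concurrency primitives. The function names and latencies below are illustrative assumptions, not the platform's actual API:

```python
import asyncio

# Sketch of parallel post-processing: while the agent produces a reply,
# auxiliary tasks (CRM update, interaction logging) run concurrently
# instead of sequentially after the contact, shrinking the post-processing
# share of AHT. All names and sleep durations are hypothetical stand-ins.

async def generate_reply(query: str) -> str:
    await asyncio.sleep(0.2)          # stand-in for LLM generation latency
    return f"Answer to: {query}"

async def update_crm(ticket_id: int) -> None:
    await asyncio.sleep(0.1)          # stand-in for a CRM API call

async def log_interaction(ticket_id: int) -> None:
    await asyncio.sleep(0.1)          # stand-in for record-keeping

async def handle_contact(ticket_id: int, query: str) -> str:
    # Reply generation and record-keeping overlap in time; total latency
    # approaches max() of the tasks rather than their sum.
    reply, _, _ = await asyncio.gather(
        generate_reply(query),
        update_crm(ticket_id),
        log_interaction(ticket_id),
    )
    return reply

print(asyncio.run(handle_contact(42, "reset my password")))
```

Because the three coroutines run concurrently, the wall-clock time of a contact is bounded by the slowest task, which is what eliminates post-processing as a separate additive component of AHT.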

A comparison of costs demonstrates that the use of autonomous agents provides a more than 90% reduction in processing cost compared with local personnel and approximately 85% savings relative to offshore outsourcing. With a monthly volume of 50,000 contacts, the total financial effect reaches about 2.8 million US dollars per year [19].
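As a sanity check, the per-contact saving implied by the cited annual figure can be recovered directly; it lands between the offshore and local personnel ranges of Table 1, consistent with a blended baseline:

```python
# Back-of-envelope check of the annual-savings figure cited in the text.
monthly_contacts = 50_000
cited_annual_savings = 2_800_000          # USD per year, from [19]

per_contact_saving = cited_annual_savings / (monthly_contacts * 12)
print(round(per_contact_saving, 2))       # 4.67 USD saved per contact
```

A saving of roughly $4.67 per contact sits between the offshore BPO midpoint (about $3) and the local-personnel range ($6–$12) net of agent cost, so the cited total is plausible for a mixed staffing baseline.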

The fundamental difference between AI agents and static script-based solutions lies in their capacity for accumulative learning. In the initial deployment phase (the first one to three months), an agent’s performance often lags behind that of experienced employees due to the need for fine-tuning prompts, configuring operational scenarios, and the stepwise enrichment of the knowledge base. As the system becomes saturated with relevant content and the agent logic adapts to domain specifics, a tipping point emerges, after which the agent consistently outperforms humans on key metrics of productivity and request-processing quality.

Below, Figure 1 demonstrates the dynamics of Average Handling Time (AHT) reduction and the learning effect of RAG.

Figure 1. Dynamics of reduction of Average Handling Time (AHT) and the effect of RAG training (compiled by the author based on [17]).

The scalability of agentic solutions is intrinsically linked to the maturity of the underlying server-side architecture and data processing capability. Instead of relying on monolithic structures, the hey-howdy.com platform implements a modular, event-driven architecture designed for high-throughput concurrency. By decoupling the reasoning engine from the state management layer (utilizing Supabase) and optimizing vector retrieval operations (via Qdrant), the system achieves linear horizontal scalability. This architectural approach ensures "computational consistency," allowing the platform to securely orchestrate thousands of simultaneous agent instances with sub-second latency. Crucially, this design incorporates strict data isolation protocols at the infrastructure level, which is a prerequisite for the safe scaling of autonomous agents in enterprise environments where data security and operational stability are paramount.
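The data-isolation requirement mentioned above can be illustrated schematically. The `TenantRouter` class and the `kb_<tenant>` collection-naming scheme below are hypothetical illustrations of the principle of infrastructure-level tenant isolation, not hey-howdy.com's actual implementation:

```python
# Minimal sketch of tenant-scoped retrieval routing: every retrieval
# request is mapped to a tenant-specific collection, and queries against
# collections the tenant does not own are rejected outright. The class
# and naming scheme are illustrative assumptions.

class TenantRouter:
    """Routes retrieval requests to per-tenant vector collections."""

    def __init__(self) -> None:
        self._collections: dict[str, str] = {}

    def register(self, tenant_id: str) -> str:
        """Provision an isolated collection for a new tenant."""
        name = f"kb_{tenant_id}"
        self._collections[tenant_id] = name
        return name

    def collection_for(self, tenant_id: str) -> str:
        """Resolve a tenant to its own collection; never a shared one."""
        try:
            return self._collections[tenant_id]
        except KeyError:
            raise PermissionError(f"unknown tenant: {tenant_id}")

router = TenantRouter()
router.register("acme")
print(router.collection_for("acme"))   # kb_acme
```

The essential property is that cross-tenant access is impossible by construction: an agent instance holds only its tenant's identifier, and the router refuses to resolve anything else.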

The dynamics of the metrics clearly show that by the sixth month of industrial operation an autonomous agent reaches parity with simple bots in request-handling speed, while simultaneously demonstrating the ability to solve tasks that previously belonged to the exclusive competence of human personnel. The 24/7 support model inherent to the hey-howdy.com architecture ensures that request processing occurs continuously. Thus, agentic systems move beyond the traditional trade-off between task complexity and resolution speed [6, 9].

In the domain of customer experience (CX), autonomous agents radically rethink the nature of proactive engagement. Classical reactive support models, which rely exclusively on initiating a dialogue after a ticket is created, often miss the critical window of user frustration, leading to preventable churn. The hey-howdy.com platform introduces the concept of "anticipatory service," where agents detect potential friction points based on behavioral signals before they escalate into formal complaints.

Based on the analysis [7, 10, 13], a multi-agent proactive pipeline underlying this process is reconstructed. At the first stage, a specialized Observer Agent monitors real-time user telemetry and session data, analyzing patterns such as repeated integration errors, "rage clicks," or abnormal drop-offs in usage flow. Next, a Diagnostic Agent is engaged, identifying the root cause of the friction and correlating it with known knowledge base solutions [28, 29]. At the next stage, a Communication Agent constructs a personalized message that acknowledges the specific context (e.g., "I noticed your export failed...") and offers an immediate solution or a relevant guide. The final link in the chain is a Resolution Agent performing the functions of a Customer Success Manager: it initiates the outreach, tracks the user’s resolution status, and ensures the issue is closed without the need for human operator intervention [13, 18].
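The four-stage pipeline can be rendered as a minimal sketch. The agent names follow the text, but their internals (signal thresholds, the remedy mapping, the message template) are placeholder assumptions:

```python
# Schematic reconstruction of the Observer -> Diagnostic -> Communication
# -> Resolution pipeline. Thresholds, remedy mappings, and message text
# are hypothetical; real stages would be LLM-backed agents.

from dataclasses import dataclass

@dataclass
class Signal:
    user_id: str
    event: str        # e.g. "export_failed", "rage_click"
    count: int

def observer(signals: list[Signal]) -> list[Signal]:
    """Observer Agent: flag users whose friction events exceed a threshold."""
    return [s for s in signals if s.count >= 3]

def diagnose(signal: Signal) -> str:
    """Diagnostic Agent: map a friction pattern to a knowledge-base remedy."""
    remedies = {"export_failed": "kb/export-troubleshooting"}
    return remedies.get(signal.event, "kb/general-help")

def compose(signal: Signal, remedy: str) -> str:
    """Communication Agent: draft a context-aware outreach message."""
    return f"I noticed your {signal.event.replace('_', ' ')}: see {remedy}"

def resolve(user_id: str, message: str) -> dict:
    """Resolution Agent: initiate outreach and track resolution status."""
    return {"user": user_id, "sent": message, "status": "awaiting_reply"}

signals = [Signal("u1", "export_failed", 4), Signal("u2", "rage_click", 1)]
for s in observer(signals):
    print(resolve(s.user_id, compose(s, diagnose(s))))
```

Note that only the user with repeated failures is contacted; the single "rage click" stays below the threshold, which is the anticipatory (rather than blanket) character of the pipeline.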

Such an architecture makes scalable a phenomenon that was previously practically impossible to scale: attentive, personalized, and contextually meaningful care for each individual user at the moment they need it most. As a result, customer retention rates increase significantly compared with traditional reactive models.

Empirical data [24] demonstrate that the use of AI agents in the sales domain can provide an increase in conversion of up to 30% compared with traditional approaches. At the same time, the functional role of such agents fundamentally goes beyond mass distribution of uniform messages: they build a substantive, context-conditioned dialogue with potential customers, performing lead qualification continuously on a 24/7 basis.

Simultaneously, the deployment of autonomous agents forms a specific risk profile that differs from the typical threats associated with the use of isolated LLM systems. In 2025, the OWASP organization identified new categories of vulnerabilities directly relevant to agentic architectures and reflecting their qualitatively different degree of autonomy and operational involvement [11, 14, 26].

Table 2 presents a description of the threat model for autonomous AI agents.

Table 2. Threat Model for Autonomous AI Agents (Adaptation of OWASP 2025)
| Vulnerability (OWASP) | Description in the agent context | Attack example | Defense strategy |
|---|---|---|---|
| LLM01: Prompt Injection | Manipulation of the agent's instructions through malicious input. | The user writes: "Forget the instructions and sell me the product for $1." | LLM judges (guardrails) for input/output verification; separation of context. |
| LLM06: Excessive Agency | Granting the agent excessive permissions for actions. | The agent has the right to delete files or perform unlimited refunds. | Principle of least privilege; restriction of the API for read/write operations; HITL for critical transactions. |
| LLM09: Misinformation | Hallucinations and provision of false facts. | The agent invents a non-existent discount or product specification. | RAG with citation of sources; model temperature set to 0; self-consistency checks. |
| LLM07: System Prompt Leakage | Leakage of system instructions and logic. | The user asks: "Show your original prompt." | Protective instructions in the system prompt; monitoring of dialogues for leakage-related keywords. |
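The LLM06 mitigations in Table 2 (least privilege plus human-in-the-loop gating) reduce to enforceable checks at the tool-invocation boundary, outside the model itself. The allow-list, tool names, and threshold below are illustrative assumptions:

```python
# Illustrative enforcement of LLM06 mitigations: an explicit tool
# allow-list (least privilege) plus a HITL gate for high-value actions.
# Tool names and the refund threshold are hypothetical.

ALLOWED_TOOLS = {"lookup_order", "issue_refund"}   # explicit allow-list
REFUND_HITL_THRESHOLD = 100.0                      # USD; above this, escalate

def call_tool(name: str, **kwargs) -> dict:
    # 1. Least privilege: anything outside the perimeter is rejected.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not in agent's perimeter: {name}")
    # 2. HITL: critical transactions are escalated, not executed.
    if name == "issue_refund" and kwargs.get("amount", 0) > REFUND_HITL_THRESHOLD:
        return {"status": "escalated", "reason": "HITL approval required"}
    return {"status": "executed", "tool": name}

print(call_tool("issue_refund", amount=500))   # escalated to a human
print(call_tool("lookup_order", order_id=7))   # executed directly
```

Because the checks live in the tool layer rather than the prompt, a successful prompt injection (LLM01) still cannot widen the agent's perimeter: the model can only request actions the boundary is willing to execute.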

Particular attention should be paid to the problem of hallucinations in RAG systems. Study [30] demonstrates that even when using the Retrieval-Augmented Generation scheme, models retain a tendency to complete missing or weakly substantiated facts, relying on probabilistic assumptions rather than on actual data. An effective engineering mechanism for reducing this risk is the inclusion of a specialized Fact-Checking Agent in the multi-agent chain, performing a verification function: the generated response is compared with the corpus of retrieved documents, the consistency of factual statements and their correspondence to sources are checked, and only the validated version is then delivered to the end user. Thus, the RAG approach is supplemented with a meta-control layer that minimizes epistemic errors [12, 15, 20].
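The verification step can be approximated as follows. The token-overlap heuristic is a deliberately simple stand-in for the LLM-judge or NLI comparison a production Fact-Checking Agent would use; function names and thresholds are assumptions for illustration:

```python
# Sketch of the Fact-Checking Agent step: every sentence of the draft
# answer must be supported by at least one retrieved passage, here
# approximated by token overlap. A production system would use an LLM
# judge or an NLI model instead of this heuristic.

def supported(claim: str, passages: list[str], min_overlap: int = 3) -> bool:
    """True if the claim shares enough tokens with some retrieved passage."""
    claim_tokens = set(claim.lower().split())
    return any(len(claim_tokens & set(p.lower().split())) >= min_overlap
               for p in passages)

def fact_check(draft_sentences: list[str], passages: list[str]) -> list[str]:
    """Deliver only the sentences grounded in the retrieved corpus."""
    return [s for s in draft_sentences if supported(s, passages)]

passages = ["The premium plan costs 49 dollars per month and includes export."]
draft = ["The premium plan costs 49 dollars per month.",
         "There is currently a 90 percent discount."]
print(fact_check(draft, passages))
```

The invented discount shares no tokens with the retrieved corpus and is filtered out before delivery, which is exactly the meta-control function described above.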

At the same time, it is necessary to critically examine the widespread notion that automation inevitably destroys empathy and reduces customer satisfaction. Observed data indicate the opposite trend when well-designed agentic systems are used: properly configured AI agents deliver more consistent, timely, and personalized interaction than overloaded human personnel, which positively affects satisfaction indicators [23, 27]. The "valley of disappointment" phenomenon in customer service is largely driven by the experience of interacting with primitive, scripted bots incapable of contextualization and adaptation. The deployment of Agentic AI, combining advanced language models, memory, planning, and meaningful tool use, makes it possible to overcome this phase and establish a new standard of quality for digital communications (see Fig. 2).

Figure 2. Correlation of automation level and CSAT: Overcoming the "Valley of Bots"

The analysis of the dynamics of customer satisfaction indicators shows that the deployment of Agentic AI makes it possible not only to compensate for the decline characteristic of the "valley of disappointment" stage, but also to surpass the CSAT level associated with traditional human service. This effect is achieved through a combination of instantaneous response, high accuracy of request handling, and consistency in adhering to service standards, which is difficult to reproduce given the human factor [16, 21, 32].

The transition to agentic systems is not identical to displacing humans from the service loop; a profound transformation of roles and a redistribution of functional load occur. The concept of the AI-Enabled Engineer promoted by the hey-howdy.com engineering team, as well as the evolution of the support operator role, indicates a shift of the human to a meta-level of governance and semantic control. In this context, several key professional roles are formed. First, the observer-supervisor role, focused on monitoring agent performance, interpreting anomalies in their behavior, and targeted intervention in complex and non-routine cases in a human-in-the-loop format. Second, the architect role, responsible for designing and tuning prompts, managing and curating the knowledge base, and constructing new scenarios and workflows for agents, that is, for the evolution of the agent ecosystem itself [34]. Third, the empathy specialist role, focused on interaction with emotionally complex customers and situations in which machine models still lag behind humans in depth of compassion, communicative nuance, and the ability to operate within subtle socio-psychological contexts [22, 25, 35].

Practice shows that hybrid human–machine teams, in which AI agents assume up to 80% of routine and standardized operations while humans concentrate on 20% of exceptional, complex, and emotionally loaded cases, demonstrate the highest overall effectiveness. Such a configuration simultaneously maximizes the economic effect of automation and preserves a critically important human layer that ensures system resilience, trust, and adaptability under uncertainty.

Conclusion

The conducted analysis confirms the proposed hypothesis that autonomous AI agents represent a qualitative leap in the evolution of corporate business processes. The transition from imperative algorithms characteristic of RPA and early generations of chatbots to declarative agentic systems establishes a new operational paradigm in which businesses gain the ability to achieve previously unattainable levels of efficiency, flexibility, and scalability. Agentic solutions cease to be point automation of individual operations and become an infrastructure layer of a digital workforce embedded in the domains of marketing, sales, service, and internal processes.

From an economic perspective, agentic systems radically restructure the cost profile: variable personnel expenditures (OPEX) dependent on headcount and employee utilization are transformed into predominantly fixed and highly scalable technological costs. This enables a reduction in cost per transaction of up to 90% without sacrificing service quality. From a technological standpoint, the combination of multi-agent orchestration and the Agentic RAG architecture provides more reliable context handling and makes it possible to substantially minimize the hallucination effect, rendering the use of AI acceptable in responsible, regulated, and risk-sensitive business tasks. The practical experience of platforms such as hey-howdy.com demonstrates that, with a correctly designed personalization architecture, agents are capable of taking on not only routine but also creative and communicatively complex functions, including sales and outreach, which for a long time were considered the exclusive prerogative of humans.

At the same time, requirements for security and risk management intensify. Effective deployment of autonomous agents presupposes a shift to a Zero Trust for AI paradigm, within which each agent is treated as a potential attack vector and is granted a strictly limited, formally specified perimeter of authority, supplemented by rigorous guardrails at the level of data, tools, and actions. Under the conditions of 2025, competitive dynamics are gradually shifting from comparing product quality to comparing the effectiveness of the digital workforce, which determines the unit cost of operations and the speed of adaptation to environmental change. Organizations that ignore the agentic approach risk encountering a widening gap in productivity and cost structure that will be difficult to offset through traditional optimization measures. The most resilient position is held by companies that have managed to build a governed hybrid orchestra of humans and autonomous agents, in which human capital and agentic systems mutually reinforce each other, forming a new dimension of competitive advantage.

References
  1. McKinsey & Company. (2025, March 12). The state of AI: How organizations are rewiring to capture value. McKinsey. DOI: 10.7551/mitpress/15845.003.0007
  2. Gartner. (2025, March 5). Gartner predicts agentic AI will autonomously resolve 80% of common customer service issues without human intervention by 2029. Gartner Newsroom. DOI: 10.2139/ssrn.6130946
  3. Adegbuyi, F. (2024, October 10). AI Agent vs AI Chatbot: Key Differences Explained. DigitalOcean. DOI: 10.5220/0011730300003393
  4. Casteel, S. (2025, May 16). Chatbots vs. AI agents: What's the difference? Qualified+. DOI: 10.2139/ssrn.5176219
  5. Abou Ali, M., Dornaika, F., & Charafeddine, J. (2026). Agentic AI: A comprehensive survey of architectures, applications, and future directions. Artificial Intelligence Review, 59, Article 11. DOI: 10.1007/s10462-025-11422-4
  6. Namin, A. S., & Singh, S. (2025). A survey on large language models for code generation. ACM Transactions on Software Engineering and Methodology. DOI: 10.1145/3747588
  7. Singh, N., & Namin, A. S. (2025). A survey of chatbots and large language models: Testing and evaluation techniques for enhanced reliability. Natural Language Processing Journal, 3, 100128. DOI: 10.1016/j.nlp.2025.100128
  8. Vassilakopoulou, P., Haug, A., Salvesen, L. M., & Pappas, I. O. (2023). Developing human/AI interactions for chat-based customer services: Lessons learned from the Norwegian government. European Journal of Information Systems, 32(1), 10-22. DOI: 10.1080/0960085x.2022.2096490
  9. Dilmegani, C., & Sarı, E. (2025, September 9). Top 20+ agentic RAG frameworks. AIMultiple. DOI: 10.2139/ssrn.5212343
  10. Payong, A., & Mukherjee, S. (2025, January 14). RAG, AI agents, and agentic RAG: An in-depth review and comparative analysis. DigitalOcean. DOI: 10.36227/techrxiv.175624551.12254549/v1
  11. Zhang, Y., Li, X., Chen, H., & Liu, Q. (2025). Multi-agent collaboration mechanisms: A survey of LLMs. arXiv. DOI: 10.48550/arXiv.2501.06322
  12. Wang, J., Zhao, L., & Chen, S. (2025). Synergizing logical reasoning, knowledge management and collaboration in multi-agent LLM system. arXiv. DOI: 10.48550/arXiv.2507.02170
  13. Howdy. AI agents for exceptional customer support. DOI: 10.2307/j.ctv14h4x4.34
  14. Kraus, S., Oshrat, Y., Aumann, Y., Hollander, T., Maksimov, O., Ostroumov, A., & Shechtman, N. (2023). Customer service combining human operators and virtual agents: A call for multidisciplinary AI research. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 13, pp. 15393-15401). DOI: 10.1609/aaai.v37i13.26795
  15. Uzoka, A., Cadet, E., & Ojukwu, P. U. (2024). Leveraging AI-powered chatbots to enhance customer service efficiency and future opportunities in automated support. Computer Science & IT Research Journal, 5(10), 2485-2510. DOI: 10.51594/csitrj.v5i10.1676
  16. Han, E., Yin, D., & Zhang, H. (2023). Bots with feelings: Should AI agents express positive emotion in customer service? Information Systems Research, 34(3), 1296-1311. DOI: 10.1287/isre.2022.1179
  17. Sharma, P., Li, R., & Gupta, A. (2025). Redefining CX with agentic AI: Minerva CQ case study. arXiv. DOI: 10.48550/arXiv.2509.12589
  18. McKinsey & Company. (2024, June 25). The promise of generative AI for credit customer assistance. McKinsey. DOI: 10.1002/9781394320035.ch03
  19. Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work. The Quarterly Journal of Economics, 138(4), 2107-2143. DOI: 10.1093/qje/qjae044
  20. Fiedler, L., Maechler, N., Giese, A., Malfara, D., & Kampa, D. (2025, October 9). Next best experience: How AI can power every customer interaction. McKinsey. DOI: 10.1007/978-3-319-89957-2_5
  21. Bing, C. (2025, June 25). Companies embracing AI in call centers to cut costs, boost customer service. Reuters. DOI: 10.1109/netcrypt65877.2025.11102796
  22. Stanford Institute for Human-Centered Artificial Intelligence. (2025). AI Index Report 2025. Stanford University. DOI: 10.2139/ssrn.5195079
  23. Shinn, N., Labash, B., Gopinath, A., Narasimhan, K., & Yao, S. (2023). Reflexion: Language agents with verbal reinforcement learning. arXiv. DOI: 10.48550/arXiv.2303.11366
  24. Salesforce. (2024, October 29). Top AI agents statistics for 2025. Salesforce News & Insights. DOI: 10.21275/sr25910220517
  25. Glassix Team. (2024, February 6). Glassix study shows: AI chatbots enhance conversion by 23% and resolve issues 18% faster with 71% success. Glassix. DOI: 10.48009/4_iis_2024_106
  26. OWASP. (2025). OWASP Top 10 for Large Language Model Applications (LLM) 2025. OWASP. DOI: 10.7717/peerj-cs.2821/table-10
  27. National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). NIST. DOI: 10.6028/nist.ai.100-1.jpn
  28. Google. (n.d.). Dialogflow: Best practices. Google Cloud Documentation. DOI: 10.1007/978-1-4842-7014-1_3
  29. Chan, L., & Zhan, M. (2023). Harms from increasingly agentic algorithmic systems. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23) (pp. 1110-1122). DOI: 10.1145/3593013.3594097
  30. Chen, Y., Liu, H., Zhang, X., & Wang, S. (2025). Benchmarking LLM faithfulness in RAG with evolving leaderboards. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 155-170). Association for Computational Linguistics. DOI: 10.18653/v1/2025.emnlp-industry.14
  31. Wu, H., Zhang, J., Ding, H., & Chen, Z. (2024). RAGTruth: A hallucination corpus for developing trustworthy retrieval-augmented language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2024) (pp. 4485-4504). Association for Computational Linguistics. DOI: 10.18653/v1/2024.naacl-long.253
  32. Microsoft. (2025, October 13). Integrate a Copilot agent. Microsoft Learn. DOI: 10.1002/9781394352418.ch3
  33. Talkative. (2025, July 22). Chatbot containment rate: What it is & how to improve it. Talkative. DOI: 10.1163/9789004753846_014
  34. MacCartney, J., Brun, Y., Holmes, R., & Ahmed, I. (2024). Generative AI-enabled software engineering: A research agenda. ACM Transactions on Software Engineering and Methodology. DOI: 10.1145/3708560
  35. Amazon Web Services. (2025, November 20). Autonomous call center. AWS Well-Architected Framework: Generative AI Lens. DOI: 10.1002/9781394406708.part2