Legal Considerations for AI Agents: AI Governance —From the Necessity of AI Governance to Methods and Practice—
2026.05.12
Introduction
The prior blog (the fourth in the series, posted on 9/10/2025) focused on the usage phase of AI agents, organized the legal and practical risks inherent in inputs and outputs, and explained the need to visualize “what is input and what is output,” and to operate such systems by appropriately combining human involvement and technical controls. AI agents are spreading quickly. According to a joint survey by MIT Sloan Management Review and Boston Consulting Group, within only two years of their emergence, 35% of companies have already adopted AI agents, and 44% of companies plan to adopt AI agents in the near future[i]; thus, it is expected that the number of companies interested in the development and use of AI agents will continue to increase.
In previous blogs, I have examined the legal risks of AI agents from a static perspective by organizing individual issues; however, for companies that develop or use AI agents, the question arises as to how to manage the risks of AI agents in day-to-day operations. This blog addresses such risk management or governance of AI agents[ii].
Key Points of This Blog
- “Defensive” and “Offensive” Aspects of AI Governance
AI governance has two aspects. The “defensive” aspect manages risks associated with autonomy. The “offensive” aspect improves response accuracy and service value through human intervention and feedback.
- Methods of AI Governance – “Agile Governance”
The “AI Business Operator Guidelines (Version 1.1)” issued by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry recommend the adoption of “Agile Governance”. Under this approach, feedback loops run continuously and bidirectionally between the management and operational levels. This replaces fixed rule-based operations and helps companies respond to rapid changes in technology and the social environment.
- Practice of AI Governance
Companies first analyze the environment in which they operate and the relevant risks. They then decide whether to develop, provide, or use AI agents. Where they proceed, they combine three types of measures: technical measures based on risk evaluation, legal measures such as Human-in-the-loop (human involvement), and organizational measures to ensure that feedback loops function effectively.
Necessity of AI Governance
(1) Defensive and Offensive Aspects of AI Governance
As stated in the prior 9/10/2025 blog, with respect to outputs at the usage stage of AI agents, outputs are not constant even with identical inputs and inevitably fluctuate; moreover, inputs may require examination in relation to various laws and regulations.
In light of these characteristics of AI agents, AI governance is necessary in two senses. That is, (i) a defensive meaning of managing the risks of AI agents, and (ii) an offensive meaning of improving the quality of AI agents. The report of the survey jointly conducted by MIT Sloan Management Review and Boston Consulting Group, as mentioned above, also points out that, unless appropriate governance of AI agents is established, there is a risk of legal violations, low-quality outputs, and adverse impacts on operations (aspect (i)), whereas, if AI agents are appropriately managed, it becomes possible to expand high-quality capabilities beyond human constraints (aspect (ii))[iii].
Specifically, with respect to (i) managing the risks of AI agents, as stated in the prior 9/10/2025 blog, there are legal risks associated with both inputs and outputs at the usage stage, and it is therefore necessary to control these risks. In addition, companies developing AI agents must control not only risks at the usage stage but also, as stated in the second and third blogs, risks in development contracts and risks arising from accidents caused by their own AI agent products.
Next, with respect to (ii) improving the quality of AI agents, consider, for example, a case in which, as part of AI governance, a process is adopted whereby AI agent outputs are not used as-is, but are reviewed through human judgment. In such cases, where errors or low quality are identified in AI agent outputs, feedback can be provided, leading to improvements in the quality of the AI agent. On the other hand, if appropriate AI governance is not implemented and risk evaluation is conducted only at specific points in time, such as at the time of introduction, opportunities to improve the quality of AI agents will be lost. It can be said that implementing AI governance has the potential to enhance the value provided by AI agents. Furthermore, as one commentator has pointed out, “the degree of compliance with AI ethics constitutes the quality of AI”[iv], and appropriate AI governance may therefore also increase the likelihood that a company’s products and services will be accepted in the market.
Accordingly, AI governance is necessary for AI agents from both defensive and offensive perspectives.
(2) Companies Requiring AI Governance
From the meaning of AI governance described above, AI governance is necessary not only for companies that develop and provide AI agents but also for companies that use AI agents. The “AI Business Operator Guidelines (Version 1.1)” issued by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry (hereinafter, the “AI Business Operator Guidelines”) likewise target all persons involved in the development, provision, and use of AI in various business activities. The Guidelines further point out that “it is important to establish AI governance that manages risks related to AI at a level acceptable to stakeholders while maximizing the benefits derived therefrom”[v].
However, it does not follow that all entities involved in the development, provision, and use of AI are required to establish AI governance at the same level. From the perspective of risk management, AI developers and AI providers, by virtue of developing or providing AI, have a greater impact on society as a whole[vi], whereas AI users do not necessarily have a significant impact on society. In addition, from the perspective of improving quality, the required content of quality and the measures that can be taken differ depending on whether the service being provided is the development, provision, or use of AI. It is necessary for each entity to establish optimal governance based on the concept of AI governance according to its respective position.
Methods of AI Governance
In AI governance, various factors surrounding AI agents may change.
For example, for companies using AI, new AI agent tools continue to emerge, and it is expected that not only the tools currently in use but also those adopted in the future will continue to change.
In addition, societal perceptions of AI agents are also expected to change.
Furthermore, consider, for example, an AI agent for outsourced sales: multiple entities may be involved in the value chain of a single AI agent, including (i) the company that actually uses the AI agent for sales activities, (ii) the company that provides the AI agent system, (iii) the company that develops the AI agent system, and (iv) the company that provides the foundation model for the AI agent system. Accordingly, changes occurring at one entity within the value chain may propagate and affect many others.[vii]
As described above, because various factors surrounding AI agents may change, governance that is premised on such change is required. Therefore, from the perspective of establishing a system that continuously updates the content of governance, the AI Business Operator Guidelines recommend “Agile Governance,” in which a dual-loop structure, consisting of a feedback loop at the management level and a feedback loop at the operational level, is operated continuously.
Practice of AI Governance
(1) Environmental and Risk Analysis
In considering the introduction of AI agents, it is necessary to understand, in light of the environment in which the company operates, what benefits exist and what risks are present.
In risk assessment, perspectives such as the human-centered principle, safety, and fairness, which are set forth as “common guidelines” in the AI Business Operator Guidelines, serve as useful references. In addition, under the EU AI Regulation, AI that may lead to discrimination is classified as “unacceptable AI,” and AI that performs profiling of individuals is classified as “high-risk AI”; such classifications of risk by AI category also provide useful reference points.
In environmental assessment, it is desirable to examine not only the internal environment of the company but also the perspectives of stakeholders who may be affected by the development, provision, and use of AI agents.
Based on such environmental and risk analyses, a determination is made as to whether to proceed with the development, provision, and use of AI agents.
(2) Formulation of Policies
Based on environmental and risk analysis, the feasibility of developing, providing, and using AI agents is first considered. If proceeding with implementation, it is conceivable to establish the goals of AI governance through the formulation of an AI policy. Formulating and publishing an AI policy serves as both an external and internal signal of the company’s serious stance. Moreover, an AI policy is not something that ends once established; rather, it serves as a guideline for future AI utilization and should be continuously revised and improved in response to changes in the environment.
With respect to the specific content to be incorporated into policies, although this largely depends on each company’s circumstances, it is conceivable, with reference to the AI Business Operator Guidelines, to include the values to be respected in the development, provision, and use of AI agents, such as the human-centered principle; how systems to realize such values will be established; and what value will be provided to customers and society through the development, provision, and use of AI agents.
(3) System Development and Operation
In specific system development and operation, the key point is to establish a cycle in which risks are assessed, and based thereon, responses are determined and implemented.
First, for example, in the case of a business operator using AI agents, the benefits and risks are evaluated for each individual AI tool and method of use. Risk evaluation is generally conducted by multiplying “severity” by “probability of occurrence.” Next, it is determined what measures should be taken in response to the evaluated risks. The following four types of responses may be considered[viii] (a minimal scoring sketch follows the list):
1. Risk Retention: proceeding with the project while accepting the risk
2. Risk Mitigation: reducing risk through organizational rules, technical measures, and system development
3. Risk Transfer: transferring or sharing risk with third parties through contracts or insurance
4. Risk Avoidance: abandoning the project itself where the risk is excessively high
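To make the “severity” × “probability of occurrence” scoring and the mapping to the four response types above more concrete, the following is a minimal Python sketch. The 1-to-5 rating scale, the score thresholds, and the `transferable` flag are hypothetical assumptions introduced purely for illustration; they are not prescribed by the AI Business Operator Guidelines or the cited literature.

```python
from enum import Enum

class Response(Enum):
    RETENTION = "risk retention"    # accept the risk and proceed
    MITIGATION = "risk mitigation"  # reduce via rules, technology, systems
    TRANSFER = "risk transfer"      # shift via contracts or insurance
    AVOIDANCE = "risk avoidance"    # abandon the project itself

def risk_score(severity: int, probability: int) -> int:
    """Multiply severity by probability of occurrence.
    Both are rated on a hypothetical 1-5 scale."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("ratings must be on the 1-5 scale")
    return severity * probability

def choose_response(score: int, transferable: bool = False) -> Response:
    # Hypothetical thresholds for illustration only; actual thresholds
    # must reflect each company's own risk appetite.
    if score >= 20:
        return Response.AVOIDANCE
    if score >= 12:
        return Response.TRANSFER if transferable else Response.MITIGATION
    if score >= 6:
        return Response.MITIGATION
    return Response.RETENTION

# Example: a severe risk (5) that testing shows to be fairly likely (4)
# scores 20 and points toward risk avoidance.
print(choose_response(risk_score(5, 4)))  # Response.AVOIDANCE
```

In practice, such a mapping would rarely be fully automated; the point is only that the evaluation described in the text can be expressed as a repeatable, auditable procedure that operational and management feedback loops can revisit over time.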
(4) Case Study of Operation: Use in Personnel Evaluation
To form a concrete image, consider as an example the use of AI agents in “personnel evaluation.” To keep the case simple, I focus on an “ethical risk”: bias in AI agent outputs that produces unfair evaluations.
With respect to ethical risk, it is important, through technical testing and legal examination, to assess the “severity” and “probability of occurrence” of the identified risks. From the perspective of severity, the risk associated with personnel evaluations lacking fairness can be significant. On the other hand, the probability of occurrence depends on the results of technical testing. Where the use of AI agents in personnel evaluation proceeds based on such a risk evaluation, the following mitigation measures may be considered with respect to risks relating to bias and fairness.
- For example, from a technical perspective, it is conceivable to remove bias or prevent problematic outputs through measures such as system prompts, fine-tuning, and the setting of guardrails (a rough sketch combining such a guardrail with human review appears after this list).
- From a legal perspective, it is conceivable to establish rules to cover aspects that cannot be addressed technically, and to develop rules such as requiring human confirmation and correction (“Human-in-the-loop”).
- Furthermore, from an organizational perspective, it is conceivable to conduct periodic audits to ensure that fairness issues are not arising, and to establish systems that enable reporting and improvement from operational sites.
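As a rough illustration of how the technical and legal measures above might interlock, the sketch below combines a deliberately crude, hypothetical keyword guardrail with a Human-in-the-loop review queue. `PROTECTED_TERMS`, `guardrail_flags_bias`, and `route_for_review` are illustrative names, not an existing library; real bias detection would rely on the fine-tuning, system prompts, and statistical testing described above rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical keyword guardrail; a production system would use
# statistical fairness testing, not simple keyword matching.
PROTECTED_TERMS = ("age", "gender", "nationality")

@dataclass
class DraftEvaluation:
    employee_id: str
    text: str          # the AI agent's draft personnel evaluation
    flagged: bool = False

def guardrail_flags_bias(text: str) -> bool:
    """Technical measure: crude stand-in for a bias detector that
    flags drafts referencing protected attributes."""
    lowered = text.lower()
    return any(term in lowered for term in PROTECTED_TERMS)

def route_for_review(draft: DraftEvaluation, review_queue: list) -> None:
    """Legal and organizational measures: no draft becomes a final
    evaluation without human confirmation (Human-in-the-loop); flagged
    drafts can be escalated, and the queue doubles as an audit trail
    for the periodic fairness audits described above."""
    draft.flagged = guardrail_flags_bias(draft.text)
    review_queue.append(draft)

queue: list[DraftEvaluation] = []
route_for_review(DraftEvaluation("E-001", "Strong results this quarter."), queue)
route_for_review(DraftEvaluation("E-002", "Given her age, output was adequate."), queue)
print([(d.employee_id, d.flagged) for d in queue])  # [('E-001', False), ('E-002', True)]
```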
These responses involve intertwined elements of technology, legal affairs, and business. It is essential to form cross-functional teams, as a single department cannot address them on its own.
Conclusion
In this blog, I have explained the necessity and practical methods of “AI governance” for AI agents. AI agents possess characteristics such as autonomously executing tasks and producing outputs that may vary in response to learning and environmental changes. Accordingly, a governance system operated as a continuous “line” (process) rather than at isolated points in time, in which risks are continuously monitored, evaluated, and addressed across the organization, is indispensable.
In subsequent installments of this blog series, based on discussions in the Ministry of Economy, Trade and Industry’s “Study Group on the Ideal Form of Civil Liability in AI Utilization”[ix], I will explain AI agents and civil liability.
[i] S. Ransbotham et al., The Emerging Agentic Enterprise, MIT Sloan Management Review & Boston Consulting Group, Nov. 2025.
[ii] There is no clearly established definition of AI risk management or AI governance; however, because the terms are sometimes used interchangeably, this blog treats both as referring to how risks associated with AI are managed.
[iii] Supra note i, at 10.
[iv] Yuichiro Kuwano et al., Utilization of Generative AI and Legal Affairs Learned Through Consultation Cases (Yuhikaku, 2025), 25.
[v] Ministry of Economy, Trade and Industry, AI Business Operator Guidelines (Version 1.1) (Mar. 28, 2025), 4, 27.
https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20250328_1.pdf
[vi] Supra note v, at 30, 35.
[vii] Hiroki Habuka, Introduction to AI Governance (Hayakawa Shinsho, 2023), 121–122.
[viii] Supra note vii, at 131–134.
[ix] Ministry of Economy, Trade and Industry, Study Group on the Ideal Form of Civil Liability in AI Utilization, https://www.meti.go.jp/shingikai/mono_info_service/ai_utilization_civil/001.html