AI and Civil Liability in Japan (Part 1): What Matters in Litigation When AI Supports Human Decisions
2026.03.13
Key Takeaways
- METI's Draft Guidance classifies AI use into two categories for civil liability purposes: "assistance/support" and "reliance/substitution." Most current AI systems fall into the assistance/support category. AI agents, depending on which parts of a business process they automate, can belong to either category.
- In the assistance/support category, AI users need to verify the accuracy and appropriateness of AI outputs. The Draft Guidance takes the position that causation between the developer/provider's conduct and third-party harm is generally negated by the user's intervening decision. The commentary cited for this position, however, has a narrower scope than the Draft Guidance suggests. AI developers and providers should not rely too heavily on this causation argument and should instead focus on measures that negate negligence.
- Negligence is assessed based on the circumstances at the time of the conduct, not in hindsight. AI-related technology and business conditions change rapidly. A party is not negligent simply because its conduct later proves inadequate.
- The Draft Guidance acknowledges the possibility of a de facto presumption of negligence, but its application in the AI context is likely to be very limited. Even in medical malpractice and environmental litigation, courts have applied this doctrine only in narrow circumstances. Compared to medical practice, the AI field has no established uniform standard of care. A broad presumption of negligence is therefore difficult to justify.
- AI users, developers, and providers should document their decision-making processes: the rationale behind their decisions, the factual assumptions they relied on, and the verification and response frameworks they put in place. Such documentation will help negate a finding of negligence.
Background
The Draft Guidance and Its Two-Category Framework
The Ministry of Economy, Trade and Industry ("METI") convened a study group to examine civil liability arising from AI use. The group has been analyzing the responsibilities of AI users, developers, and providers.[i] Its findings have been compiled in the Draft Guidance on the Interpretation and Application of Civil Liability in AI Use (the "Draft Guidance"), which is currently open for public comment.[ii] The Draft Guidance is not legally binding. It does, however, represent the Japanese government's first systematic effort to address civil liability in the AI context and is expected to influence practice going forward.
The Draft Guidance divides AI use into two categories.[iii] The first is "assistance/support." Here, AI outputs serve only to assist the user's own judgment, and a human decision or action is expected to intervene before any final outcome. The second is "reliance/substitution." Here, AI is provided on the premise that it will replace all or part of human judgment, and the user relies on AI's determinations.
Based on this two-category framework, the Draft Guidance analyzes civil liability—primarily tort liability—through a series of hypothetical scenarios.
AI Agents in This Framework
AI agents can fall into either category depending on which parts of a business process they automate. The Draft Guidance notes that liability for AI agents depends heavily on the specific technology and use case, making it difficult to set out a general framework at this stage. The basic framework in the Draft Guidance, however, can serve as a foundation for examining more advanced systems such as AI agents.[iv]
Parties and Claims
When AI-related harm occurs, three sets of parties are potentially involved: the AI user, the AI developer/provider, and the injured third party. Claims can flow in three directions: from the third party to the user, from the third party to the developer/provider, and from the user to the developer/provider. Where the parties have a direct contractual relationship, breach of contract is the primary basis for a claim. Where they do not, tort liability is the primary basis.[v]
Scope of This Article
This article focuses on the assistance/support category and examines tort liability in particular. Part 2 will address the reliance/substitution category. The author has experience as both a judge and litigation counsel, and this article draws on that experience in its analysis.
Tort Liability Under Japanese Law
No established body of case law or scholarly consensus has yet emerged on civil liability arising from AI use. Japanese tort and contract law, however, have a deep body of case law and scholarly analysis. These provide a solid basis for assessing liability in the AI context as well.
Elements of Tort Liability
Tort liability under Japanese law requires proof of four elements (Civil Code (Minpō), art. 709):
(i) an act infringing on a right or legally protected interest;
(ii) intent or negligence with respect to (i);
(iii) damages; and
(iv) a causal link between the infringing act and the damages.
Under Japanese law, negligence is established by identifying and proving a specific duty of care and showing that the defendant breached it. A bare assertion that "there was negligence" is insufficient. The plaintiff bears the burden of proving each of these elements with specificity.
Negligence Is Judged Without Hindsight
The key question in a negligence analysis is what the defendant should have done at the time, given the circumstances as they then existed. The question is not what, in hindsight, would have been the best course of action.
This point matters greatly in the AI context. Technology and business conditions evolve rapidly, and a party is not negligent simply because its conduct later proves inadequate.
Breach of Contract
Breach-of-contract claims similarly require specificity. The obligee must identify the precise obligation owed and prove that the obligor failed to perform it. A vague assertion such as "a duty not to cause harm to the counterparty" is not enough. The claim must be framed in concrete terms—for example, "a duty to take X measures when using AI."[vi]
The Draft Guidance on the Assistance/Support Category
AI User Liability
The Draft Guidance states that most current AI systems fall into the assistance/support category and addresses the user's tort liability as follows.[vii]
AI users are expected to evaluate the accuracy and appropriateness of AI outputs, treating them as aids to—not substitutes for—their own judgment (Scenario 1: delivery-route optimization AI; Scenario 2: legal research support AI).[viii]
Where the risks are not readily apparent and the output is not easy to verify or correct, users may need to gather information in advance and take appropriate measures to prevent harm to third parties (Scenario 3: image-generation AI; Scenario 4: transaction-vetting AI).[ix]
AI Developer/Provider Liability
The Draft Guidance addresses the developer/provider's tort liability along two axes: causation and negligence.
On causation, the Draft Guidance reasons that the user is expected to verify and, if necessary, correct AI outputs. Causation between the developer/provider's conduct and third-party harm is therefore generally negated, and developer/provider liability to third parties is limited.[x]
On negligence, the Draft Guidance takes the position that clear disclosure of the AI system's limitations, proper use, and material risks will generally negate a finding of negligence against the developer/provider.[xi]
The Draft Guidance recognizes, however, that where the user cannot easily foresee specific risks or control outputs, the developer/provider may need to take certain measures to prevent rights infringement (Scenario 3: image-generation AI; Scenario 4: transaction-vetting AI).[xii]
A Closer Look at the Draft Guidance
The Draft Guidance provides a valuable and systematic framework for analyzing civil liability in the assistance/support category. Several aspects of its analysis, however, require closer examination.
4.1 The Substantive Basis for the Assistance/Support Category and the Boundary Between Categories
The Draft Guidance identifies three grounds for classifying AI use as assistance/support, each reflecting a different situation:[xiii]
(a) Given the AI's function and the context of its use, it cannot be said that the AI is making decisions in place of a human.
(b) Regulatory requirements mandate a final human judgment.
(c) The AI's output inherently carries a risk of infringing third-party rights, making human evaluation and verification necessary.
Rationale (b) is relatively clear-cut because it turns on a specific regulatory requirement. Rationales (a) and (c), however, are not mutually exclusive. A transaction-vetting AI, for example, may satisfy both (a) and (c) simultaneously.
The classification alone does not dictate what measures an AI user should take. AI users need to assess the specific risks of the AI they use and determine the appropriate measures on a case-by-case basis.
4.2 AI User Negligence: Misclassification and Duty of Care
Under the Draft Guidance's framework, AI users in the assistance/support category need to evaluate and verify AI outputs. This requirement is consistent with the nature of assistance/support AI.
In practice, however, a user may skip human verification and treat an AI that should have been used in assistance/support mode as if it were a reliance/substitution tool. Negligence in such cases requires careful analysis.
Two considerations are relevant. First, it may not be possible to determine whether the misclassification itself caused the harm. The harm might have occurred even with proper human oversight under the assistance/support approach.
Second, the choice of how to use a given AI is a holistic judgment. It takes into account the AI's capabilities, the nature of the business, and the information available at the time. If the user conducted a reasonable analysis at the time, the choice is not a breach of the duty of care simply because it later proves suboptimal.
What matters, then, is whether the user built an appropriate framework around the chosen mode of use. If the user decided to rely on AI as a reliance/substitution tool, the key question is whether the user validated the reasonableness of delegating the decision to AI and put safeguards in place against erroneous outputs.
4.3 Developer/Provider Causation: A Closer Look at the Cited Authority
The Draft Guidance addresses developer/provider liability through both causation and negligence. The causation analysis, however, deserves closer scrutiny in light of the commentary it cites.
Scope of the cited commentary
The Draft Guidance states that "where another person's judgment intervenes in relation to a rights infringement, causation has generally been considered negated on the ground that the direct cause was that other person's decision," citing a commentary.[xiv]
A closer reading of that commentary reveals a more nuanced position. The commentary distinguishes between two situations: one where a human decision intervenes in the causal chain generally, and one where an illegal decision by a third party intervenes specifically. Causation is negated in the latter scenario—where a third party, exercising free will, decides to commit an illegal act.[xv] The commentary does not support the broader proposition that any intervening human judgment breaks the causal chain.[xvi]
Scope of the cited case
The Draft Guidance cites a district court decision as authority for negating causation.[xvii] That decision, however, turned heavily on its specific facts. Japanese lower-court decisions do not have binding precedential effect, and the ruling's scope should be understood as limited.
Judicial practice
In the author's experience, courts rarely dispose of a case on causation alone without also addressing negligence. Both elements are typically contested and adjudicated. This makes it difficult to generalize from a single decision that resolved a case solely on causation grounds.
Practical implications for developers and providers
AI developers and providers should not rely too heavily on the argument that the user's "intervening decision" negates causation. The more prudent approach is to focus on the measures discussed in Section 5 that can negate negligence.
4.4 De Facto Presumption of Negligence: Very Limited Application in AI Cases
The Draft Guidance cautions that the following analysis does not apply directly to AI. It notes, however, that features common to medical malpractice and environmental litigation—such as technical complexity and asymmetric access to evidence—may also be present in AI cases. Where a serious rights infringement provides sufficient justification, a de facto presumption of negligence could apply.[xviii]
This observation requires examination from two angles.
Procedural reality
A de facto presumption of negligence comes into play, if at all, only at the final stage of trial. It does not relieve the plaintiff of the burden of identifying negligence with specificity at the outset. In medical malpractice cases, the plaintiff must first identify the alleged breach of the duty of care and present supporting medical evidence.[xix] The same procedural sequence applies in ordinary tort cases. In AI litigation, too, the plaintiff must begin by specifying the alleged negligence and presenting supporting evidence.
Doctrinal limits: very narrow application in AI cases
Even in medical malpractice and environmental cases, courts have applied a de facto presumption of negligence only in narrow circumstances. The Supreme Court decision cited in the Draft Guidance involved a case where the drug package insert clearly stated that blood pressure should be measured at two-minute intervals.[xx] Commentators have noted that the precedent's scope should be construed narrowly.[xxi]
Compared to medical practice, the AI field has no established uniform standard of care. Given the wide variety of AI systems and the many different actors involved, it is hard to envision a situation in which a court would presume negligence for a specific AI system.
The possibility of a de facto presumption cannot be categorically excluded. Its application in AI cases, however, is likely confined to highly exceptional circumstances.
Practical Steps for Managing Liability Risk
The Importance of an Ex Ante Perspective
Courts assess negligence based on what the defendant should have done at the relevant point in time. The assessment itself occurs after the fact, in litigation, but the standard against which conduct is measured is an ex ante one: what should have been done given the information and circumstances available at the time.
For AI developers, providers, and users, preparation matters. The Draft Guidance itself notes that compliance with the AI Business Operator Guidelines—through risk analysis, system development, and organizational measures—may be considered favorably in a negligence assessment.[xxii]
In litigation, judges look not only at legal arguments but also at what decisions were actually made and why. AI users, developers, and providers should therefore document their decision-making processes and maintain records that demonstrate the reasonableness of their choices.
This approach aligns with AI governance principles: cross-functional collaboration among technical, legal, and business teams to conduct risk assessments, establish monitoring systems, and prepare incident-response protocols.
Under the burden-of-proof rules in litigation, the plaintiff must first identify the alleged negligence and prove it. How much the defendant must actually disclose about its internal decision-making will vary by case. But the value of proactive preparation does not depend on the specific procedural posture of a future dispute.
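To make the documentation point concrete, the sketch below shows one way a decision record might be structured in code. It is an illustration only: the field names and sample values are the author's hypothetical choices, not a format prescribed by the Draft Guidance or any regulation.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical decision record for an AI deployment. Every field name here
# is illustrative; no regulation or guidance prescribes this structure.
@dataclass
class AIDeploymentRecord:
    process: str                # business process in which AI is used
    mode: str                   # "assistance/support" or "reliance/substitution"
    decided_on: date            # when the deployment decision was made
    rationale: str              # why this mode was chosen at the time
    factual_assumptions: list[str] = field(default_factory=list)
    verification_measures: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)

record = AIDeploymentRecord(
    process="legal research memo drafting",
    mode="assistance/support",
    decided_on=date(2026, 3, 1),
    rationale="Outputs are drafts only; an attorney reviews them before use.",
    factual_assumptions=["Provider documentation warns that citations may be inaccurate."],
    verification_measures=["Attorney verifies every cited authority before filing."],
    known_risks=["Hallucinated citations", "Outdated statements of law"],
)
```

The point is not the technology but the record: a dated entry capturing the rationale, assumptions, and safeguards as they stood at the time of the decision.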
AI User Perspective
AI users can choose how to deploy AI but may not fully understand its capabilities and limitations. A user may believe it is operating in reliance/substitution mode, only to be found later to have been objectively in the assistance/support category.
AI users should take the following steps, tailored to the nature and scale of their businesses:
First, determine which business processes will use AI and in what mode, and document the analysis for each process.
Second, record the factual assumptions underlying these decisions—including the developer/provider's descriptions of the AI's capabilities, the nature of the relevant business operations, and the anticipated risks.
Third, build a verification framework that matches the chosen mode of use. If AI is used in the assistance/support mode, the user should ensure that humans can effectively review AI outputs.
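By way of illustration, a verification framework for assistance/support mode can be expressed as a simple human-in-the-loop gate. The sketch below is the author's hypothetical example; `generate_draft`, `log_approval`, and the review callback are illustrative stand-ins for a real model call and review workflow.

```python
import logging
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)

def generate_draft(prompt: str) -> str:
    # Hypothetical stand-in for a real AI model call.
    return f"[AI draft for: {prompt}]"

def log_approval(reviewer: str, draft: str) -> None:
    # Keep an audit trail of who approved what; such records help show
    # later that human verification actually took place.
    logging.info("Reviewer %s approved AI output: %r", reviewer, draft[:80])

def assisted_decision(prompt: str,
                      review: Callable[[str], bool],
                      reviewer: str) -> Optional[str]:
    draft = generate_draft(prompt)
    # In assistance/support mode, no AI output leaves the pipeline without
    # an affirmative, recorded human sign-off.
    if review(draft):
        log_approval(reviewer, draft)
        return draft
    return None  # rejected drafts never reach third parties

# Usage: the reviewer inspects the actual draft before it is released.
result = assisted_decision("summarize the indemnity clause",
                           review=lambda draft: True,  # a human decision in practice
                           reviewer="attorney-1")
```

The design choice worth noting is that approval is recorded at the moment it happens, which is exactly the kind of contemporaneous evidence a court weighing negligence will look for.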
AI Developer/Provider Perspective
AI developers and providers cannot fully control how their products are used. An AI designed for assistance/support may be deployed by a user in reliance/substitution mode.
AI developers and providers should take the following steps, tailored to the nature and scale of their businesses:
First, clearly explain the AI's capabilities, limitations, and associated risks. As noted in Section 3, the Draft Guidance treats this disclosure as a key factor in the negligence analysis.
Second, plan for the possibility that users will deploy the AI in unintended ways. Practical measures include clear terms of use, technical guardrails, and output-level warnings.
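A minimal sketch of the second set of measures follows, assuming a hypothetical deny-list and warning text chosen by the author for illustration (real systems would rely on provider-specific moderation tooling rather than a keyword check):

```python
# Illustrative output-level guardrail: attach a usage warning to every
# response and withhold output that trips a simple content rule.
BLOCKED_TERMS = {"confidential", "trade secret"}  # hypothetical deny-list

WARNING = (
    "NOTICE: This output is AI-generated and intended to assist, not replace, "
    "human judgment. Verify accuracy before relying on it."
)

def guarded_output(raw: str) -> str:
    lowered = raw.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Guardrail: refuse rather than emit potentially infringing content.
        return WARNING + "\n[Output withheld: matched a restricted-content rule.]"
    # Output-level warning: the disclosure travels with every response.
    return WARNING + "\n" + raw
```

Such measures do not guarantee a finding of no negligence, but they give the developer/provider concrete conduct to point to when the court asks what was done ex ante.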
Looking Ahead
This article has examined civil liability in the assistance/support category in light of the Draft Guidance. Part 2 will address the reliance/substitution category.
[i] METI, Study Group on Civil Liability in AI Use, https://www.meti.go.jp/shingikai/mono_info_service/ai_utilization_civil/index.html (last visited Mar. 8, 2026).
[ii] e-Gov Public Comment, Public Comment on the "Draft Guidance on the Interpretation and Application of Civil Liability in AI Use," https://public-comment.e-gov.go.jp/pcm/detail?CLASSNAME=PCMMSTDETAIL&id=595226004&Mode=0 (last visited Mar. 8, 2026).
[iii] METI, Draft Guidance on the Interpretation and Application of Civil Liability in AI Use [Version 1.0], at 11, https://public-comment.e-gov.go.jp/pcm/download?seqNo=0000307821 (last visited Mar. 8, 2026).
[iv] Id. at 69.
[v] The Draft Guidance also addresses product liability, which is primarily relevant to the reliance/substitution category and will be discussed in Part 2.
[vi] The Draft Guidance focuses on tort liability. See supra note 3, at 4. This article follows the same approach, though the requirement of specificity applies equally to breach-of-contract claims.
[vii] Id. at 11, 20.
[viii] Id. at 12, 20.
[ix] Id. at 12, 20.
[x] Id. at 12–13.
[xi] Id. at 13.
[xii] Id. at 13.
[xiii] Id. at 11.
[xiv] Id. at 12.
[xv] Atsumi Kubota ed., Shin Chūshaku Minpō (15) Saiken (8) [New Commentary on Civil Law, Vol. 15, Obligations, Part 8] 387–88 (Yoshiyuki Hashimoto) (Yūhikaku, 2d ed. 2024).
[xvi] See, e.g., Yokota, The Current State of Bullying-Suicide Litigation Involving Students—Focusing on Causation, 1358 Hanrei Taimuzu 4 (analyzing cases where courts found causation between bullying and the victim's death despite the intervening act of the victim's own decision to take their life).
[xvii] Supra note 3, at 12–13; Fukushima District Court, Judgment, Dec. 4, 2018, 2411 Hanji 78.
[xviii] Supra note 3, at 72–76.
[xix] Hayato Yamakawa, Efforts Toward Expeditious and Planned Adjudication in Medical Malpractice Litigation, 1520 Hanrei Taimuzu 14, 15.
[xx] Supreme Court, Second Petty Bench, Judgment, Jan. 23, 1996, 50 Minshū 1 (No. 1). Under this precedent, where a physician fails to follow the instructions in a drug package insert and harm results, negligence is presumed. This presumption is based on departure from a clear, established standard, not on an inference of negligence from the nature of the accident itself. The author's point is that no comparable uniform standard exists in the AI field, making such a presumption difficult to apply.
[xxi] Shinichi Oshima, The Current State and Future of Medical Malpractice Litigation—The Reach of Supreme Court Precedent, 1401 Hanrei Taimuzu 5, 16–17.
[xxii] Supra note 3, at 10–11. The AI Business Operator Guidelines are voluntary guidelines issued by the Japanese government setting out expected conduct for AI developers, providers, and users. The Draft Guidance also sets out the measures expected of each party in each hypothetical scenario. See id. at 35, 47.