Launching the Agentic MSA — free, open, and designed for AI builders

We’re excited to share a new addition to the GitLaw community: a Master Services Agreement (MSA) tailored for AI Agents. It’s open, free to use, and built on our latest thinking about how contracts should adapt to this fast-moving area of tech law.

"Most AI agent companies are still using SaaS contract language which makes no sense. Your software isn't just sitting there helping someone fill out a form. It's booking meetings, writing code, making decisions. When something goes wrong, who's liable? The standard contracts don't answer that. We kept hearing this from founders, so when GitLaw said they were building an agent-specific MSA, we jumped in. Builders need legal frameworks that match what their agents actually do".

Manny Medina, CEO of Paid

Disclaimer: Because the law around AI agents is new and evolving, this template should be treated as a starting point, not a substitute for legal advice. Provisions may shift over time, and details can become outdated. Use it at your own risk, and always consult a qualified lawyer before relying on it in practice.

👉 Request from the community: If you have suggestions on how to improve this MSA, please get in touch. We’re keen to integrate thinking from the community.

This MSA uses CommonPaper’s excellent Software Licensing Agreement and AI Addendum, and we’ve adapted parts of the agreement and cover page to make them easier and more relevant for builders of AI agents.

You can:

  • Access it directly in the GitLaw Community
  • Ask the GitLaw AI Agent for an Agent MSA, and it will generate and customise one for you based on this template. 

When drafting or negotiating contracts for AI agents, there are several legal and commercial issues you’ll want to think through. 

1. Agent Classification and Decision Responsibility

The most fundamental consideration in AI agent contracts is clarifying the role and responsibility boundaries of the AI agent. Unlike traditional software that merely processes data, AI agents can appear to make autonomous decisions, creating ambiguity about who bears responsibility for outcomes. Your contract must explicitly establish that the AI agent functions as a sophisticated tool that assists and augments human decision-making rather than replacing human judgment entirely. This means the customer retains full responsibility for all business decisions, including those informed by or implemented through agent recommendations. Clear language around human oversight requirements and final decision authority protects both parties from liability disputes when agents produce unexpected or suboptimal results.

👉 Template Reference: This is addressed in Section 1.2 of the AI Addendum through "Agent Classification and Decision Responsibility" language.

2. Liability Limitations and Risk Allocation

Given the unpredictable nature of AI systems, establishing appropriate liability caps and risk allocation is crucial for commercial viability. Most AI agent contracts should cap damages to fees paid in the preceding 12 months and exclude indirect damages like lost profits, business interruption, or consequential damages. However, the challenge lies in determining which liabilities can and cannot be disclaimed. While IP infringement, confidentiality breaches, and willful misconduct typically cannot be excluded, the treatment of AI-specific risks like "hallucinations," bias, or discriminatory outputs requires careful consideration. Your contract should include explicit disclaimers stating that agent outputs may be inaccurate, incomplete, or biased, and may require human verification before use in material business decisions.

👉 Template Reference: This is covered in Section 7 (Limitation of Liability) and enhanced disclaimers in Section 4.1 of the AI Addendum regarding the "Nature of AI."
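To make the trailing 12-month cap concrete, here is a minimal sketch in Python. The invoice data shape and dates are hypothetical illustrations, not terms from the template; the point is simply that only fees paid inside the trailing window count toward the cap.

```python
from datetime import date, timedelta

def trailing_12_month_cap(invoices, claim_date):
    """Illustrative only: sum the fees paid in the 12 months before a claim.

    `invoices` is a list of (payment_date, amount_paid) tuples -- a
    hypothetical data shape, not something defined by the MSA template.
    """
    window_start = claim_date - timedelta(days=365)
    return sum(amount for paid_on, amount in invoices
               if window_start <= paid_on <= claim_date)

# Example: a customer paying $10,000 roughly every 30 days for 18 periods.
# Only the 12 payments inside the trailing window count, so the cap is 120,000.
invoices = [(date(2025, 10, 8) - timedelta(days=30 * i), 10_000) for i in range(1, 19)]
print(trailing_12_month_cap(invoices, date(2025, 10, 8)))  # 120000
```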

3. Data Ownership and Training Rights

Data relationships in AI agent contracts are more complex than in traditional software agreements because agents both consume customer data as input and generate new data as output, while potentially using customer interactions to improve underlying models. Your contract must clearly establish that customers retain ownership of both their input data and any output generated by the agent. Equally important is specifying whether and how you may use customer data for model training purposes. Many customers will resist allowing their proprietary data to be used for training models that benefit other users. Consider offering tiered options where customers can opt out of training data usage, potentially at a higher price point, or implement restrictions requiring aggregation and de-identification before any training use.

👉 Template Reference: This is addressed in Section 2.1 (Ownership) of the AI Addendum and the Training Data variables in the Cover Page allowing customization of training permissions.
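One way to picture the tiered opt-out approach is as a small configuration, sketched below in Python. The tier names, flags, and the 20% price uplift are assumptions for illustration, not values from the Cover Page variables.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingRightsTier:
    name: str
    training_allowed: bool
    aggregation_and_deidentification_required: bool
    monthly_price_multiplier: float  # opt-out tier priced higher, per the text above

# Hypothetical tiers mirroring the options described in this section.
TIERS = [
    TrainingRightsTier("standard", True, True, 1.0),   # training only on aggregated, de-identified data
    TrainingRightsTier("opt_out", False, False, 1.2),  # no training use, priced ~20% higher
]

def monthly_price(base_price: float, tier: TrainingRightsTier) -> float:
    """Apply the tier's pricing multiplier to a base subscription price."""
    return base_price * tier.monthly_price_multiplier

print(monthly_price(5_000.0, TIERS[1]))  # 6000.0
```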

4. Third-Party API Dependencies and Restrictions

Most AI agents rely heavily on third-party services like OpenAI, Anthropic, or specialized APIs, creating a web of dependencies that can impact service availability and introduce additional restrictions. Your contract should explicitly disclaim liability for third-party service outages, pricing changes, or policy modifications that affect your agent's functionality. Additionally, many third-party AI providers impose usage restrictions that may need to flow through to your customers, particularly for highly regulated activities like medical advice and financial counseling. Consider whether to incorporate these restrictions directly into your terms or maintain flexibility to update restrictions as third-party policies evolve.

👉 Template Reference: This should be added as a new section (suggested as "Third-Party API Dependencies" in Section 3 of the AI Addendum).

5. Mutual Indemnification Framework

AI agents create unique indemnification scenarios that require careful allocation of risk between parties. Customers should indemnify you for their misuse of the agent, including using it for unlawful purposes, failing to maintain required human oversight, or deploying it in regulated contexts without proper authorization. Conversely, you should indemnify customers for intellectual property infringement claims arising from agent outputs when used according to the agreement terms. However, this indemnity should include appropriate carve-outs for customer modifications to outputs, use in combination with third-party systems, or situations where the customer knew or should have known the output might infringe third-party rights.

👉 Template Reference: This is covered in Section 8 (Indemnification) of the main agreement, with AI-specific covered claims definable in the AI Addendum variables section.

6. Flexible Pricing Model Architecture

AI agents enable diverse pricing models that align with different customer value propositions and usage patterns. Your contract should accommodate multiple pricing structures: per-agent pricing (FTE replacement model) for customers seeking headcount substitution, per-action pricing (consumption model) for variable usage patterns, per-workflow pricing (process automation model) for defined business processes, and per-outcome pricing (results-based model) for performance-driven arrangements. The key is building flexibility into your template so you can easily adapt pricing to different customer segments and use cases without requiring contract renegotiation.

👉 Template Reference: This is comprehensively addressed in the "Fees" section of the Order Form with multiple pricing model options and detailed usage restrictions for each.
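To show how the four structures differ in practice, here is a hedged metering sketch in Python. The pricing model names mirror the ones described above, but the usage fields, rates, and function are hypothetical and are not part of the Order Form itself.

```python
from dataclasses import dataclass

@dataclass
class UsagePeriod:
    agents_deployed: int
    actions_taken: int
    workflows_completed: int
    outcomes_achieved: int

def invoice_amount(model: str, usage: UsagePeriod, rates: dict) -> float:
    """Compute a period charge under the chosen pricing model (illustrative only)."""
    if model == "per_agent":       # FTE replacement: flat fee per deployed agent
        return usage.agents_deployed * rates["per_agent"]
    if model == "per_action":      # consumption: charge each discrete agent action
        return usage.actions_taken * rates["per_action"]
    if model == "per_workflow":    # process automation: charge per completed workflow
        return usage.workflows_completed * rates["per_workflow"]
    if model == "per_outcome":     # results-based: charge only for verified outcomes
        return usage.outcomes_achieved * rates["per_outcome"]
    raise ValueError(f"Unknown pricing model: {model}")

usage = UsagePeriod(agents_deployed=2, actions_taken=5_000,
                    workflows_completed=120, outcomes_achieved=30)
rates = {"per_agent": 2_000.0, "per_action": 0.05,
         "per_workflow": 15.0, "per_outcome": 75.0}
print(invoice_amount("per_action", usage, rates))  # 250.0
```

Keeping the model choice as a single variable like this is the same flexibility the template aims for contractually: the Order Form selects the structure, and the rest of the agreement does not need renegotiation.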

7. Regulatory Flexibility and Change Management

The regulatory landscape for AI is evolving rapidly, with different jurisdictions taking varying approaches to AI governance, taxation, and liability. Your contract needs built-in flexibility to adapt to regulatory changes without requiring complete renegotiation. Include change-of-law provisions that allow either party to request contract modifications if new regulations materially increase liability or compliance obligations. Consider establishing cure periods (such as 60-90 days) for renegotiation when regulatory changes occur, with termination rights if parties cannot reach agreement. Additionally, address tax allocation upfront by specifying whether AI-related taxes are borne by the customer, provider, or shared proportionally.

👉 Template Reference: This is addressed through the new "Change-of-Law/Regulatory Cure Period" provision in Section 10.17 and tax allocation options in the Payment Process section of the Order Form.

---


These considerations reflect the unique challenges of contracting for AI agents while building on established software licensing principles. The template provides a comprehensive starting point, but each implementation should be customized based on your specific technology, customer base, and risk tolerance.

Nick Holzherr
Founder of GitLaw
October 8, 2025