AI Cluster · Procurement Process · Updated March 2026

The Enterprise GenAI Procurement Checklist: 40 Questions Before You Sign

Enterprise AI contracts signed without this checklist routinely contain IP concessions, data rights gaps, and commercial commitments that cost significantly more than the savings the AI was meant to deliver.

By Former AI Vendor Executives · 2,400 Words · March 2026

Enterprise AI procurement moves fast. Vendors are under pressure to close, business units are eager to deploy, and procurement teams are operating frameworks designed for traditional software that do not translate cleanly to AI's unique commercial risks. The result is a generation of enterprise AI agreements signed at speed with inadequate diligence.

This checklist structures the diligence process. Forty questions across eight domains — strategy, vendor assessment, pricing, contracts, security, compliance, governance, and exit planning — ensure that no material risk area is overlooked before signature. This guide works as a companion to our Enterprise AI Procurement Guide and our AI contract clauses guide.

How to Use This Checklist: Work through each section before the relevant procurement stage. Strategy questions belong in the business case stage. Vendor assessment questions drive your RFP and evaluation. Pricing and contract questions belong in the commercial negotiation stage. Security, compliance, and governance questions should be confirmed in the contract and delivery planning stage before execution. Exit questions should be answered before any contract is signed.

Section 1: Strategic Questions (1–6)

01
What specific business outcome does this AI investment deliver? Define the measurable outcome — cost reduction, revenue increase, productivity improvement — with a baseline and a target. AI investments without specific outcome metrics routinely fail to justify their cost.
02
Have we conducted a build vs. buy analysis? For many use cases, open-source models (Llama 3, Mistral) deployed on your own infrastructure deliver adequate quality at a fraction of the cost of frontier model APIs. Cloud AI services add operational overhead; self-hosted models add engineering overhead. Compare both seriously before committing to a vendor API.
03
Does this use case require frontier model capabilities? GPT-4o mini, Claude 3 Haiku, and Gemini Flash offer 90–95% of the quality of frontier models at 5–10% of the cost for most enterprise use cases. Document specifically why your use case requires frontier model capabilities before committing to frontier model pricing.
04
How does this AI investment fit within our existing cloud and software commitments? AI services consumed through your existing Azure, AWS, or Google Cloud committed-use frameworks can count toward those cloud spend commitments. AI procurement routed outside these frameworks can create parallel spend that undermines cloud commitment attainment.
05
Have we assessed the full total cost of ownership — not just API charges? AI total cost includes API charges, infrastructure (vector databases, orchestration, caching), integration development, security tooling, ongoing model monitoring, human review for high-stakes outputs, and eventual migration costs. API charges typically represent 40–60% of total AI deployment cost. See our AI total cost guide.
06
What is our AI governance framework and who owns AI procurement decisions? AI procurement decisions made by individual business units without central oversight routinely create compliance exposure, duplicated costs, and incompatible vendor relationships. Establish central AI governance ownership before signing vendor agreements.
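
The cost categories in question 5 can be rolled into a simple model before signature. A minimal sketch in Python; every line item and dollar figure below is an illustrative placeholder, not a benchmark:

```python
# Sketch of an annual AI total-cost-of-ownership model.
# All figures are hypothetical placeholders -- substitute your own estimates.
tco = {
    "api_charges": 600_000,        # metered model usage
    "infrastructure": 150_000,     # vector databases, orchestration, caching
    "integration_dev": 200_000,    # engineering to embed AI in workflows
    "security_tooling": 50_000,
    "model_monitoring": 80_000,    # quality and drift monitoring
    "human_review": 120_000,       # reviewers for high-stakes outputs
    "migration_reserve": 100_000,  # amortised future exit cost
}

total = sum(tco.values())
api_share = tco["api_charges"] / total

print(f"Total annual cost: ${total:,}")
print(f"API charges as share of total: {api_share:.0%}")
```

With these placeholder figures, API charges come out at roughly 46% of total cost, consistent with the 40–60% range cited above; the point of the exercise is that a budget built on API charges alone misses roughly half the spend.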

Section 2: Vendor Assessment (7–14)

07
Have we obtained comparable proposals from at least three AI vendors? The minimum competitive set for any enterprise AI procurement should include a direct foundation model provider, a cloud AI service provider, and at least one further alternative. Evaluating fewer options eliminates the commercial leverage that drives pricing concessions.
08
Have we benchmarked the vendor's model quality on our specific use case? Published benchmarks (MMLU, HumanEval, etc.) do not predict performance on enterprise-specific tasks. Conduct proprietary evaluations on representative samples of your actual workload before selecting a vendor. Quality differences between models are use-case specific and cannot be inferred from general benchmarks.
09
What is the vendor's financial stability and counterparty risk profile? Several AI vendors are venture-backed growth-stage companies that may not exist in their current form in five years. Multi-year enterprise commitments to financially fragile counterparties carry existential risk. Assess vendor financial health, funding runway, and M&A exposure before committing to long-term agreements.
10
Does the vendor have enterprise references in our industry? Request references from enterprises of comparable size and industry — specifically for production deployments at scale, not pilots. AI vendors have large pilot customer bases and small production deployment bases. The relevant reference is production at enterprise scale, not pilot success.
11
What is the vendor's model release cadence and deprecation history? Review how frequently the vendor releases new models, how quickly they deprecate older models, and what notice they have provided to enterprise customers for past deprecations. A vendor with a history of rapid model changes and short deprecation windows creates operational risk for production deployments.
12
Do our advisors or procurement platforms hold AI vendor certifications or commercial relationships that create conflicts of interest? Avoid advisory firms, and scrutinise procurement platforms, that hold vendor certifications or commercial relationships with AI providers: these create incentives to recommend specific vendors regardless of your commercial interests.
13
What are the vendor's contractual obligations if their AI system produces harmful or incorrect outputs in your deployment? AI models produce incorrect outputs with regularity. Understand the vendor's liability position for AI-generated errors before deploying in high-stakes use cases. Most AI vendor agreements exclude liability for AI output quality entirely.
14
Have we assessed the vendor's security posture and certifications? Confirm SOC 2 Type II, ISO 27001, and applicable industry certifications (HIPAA, FedRAMP, PCI-DSS) before deployment. Request the most recent security audit reports rather than certification summaries.
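
Question 8 argues for proprietary evaluations on your own workload rather than published benchmarks. A minimal harness might look like the sketch below, where `call_model` and the scoring function are placeholders you would supply for each candidate vendor:

```python
from statistics import mean

def evaluate(call_model, samples, score):
    """Score a model on representative samples of your actual workload.

    call_model: function(prompt) -> output (vendor-specific, supplied by you)
    samples:    list of (prompt, reference) pairs drawn from production data
    score:      function(output, reference) -> float in [0, 1]
    """
    return mean(score(call_model(p), ref) for p, ref in samples)

# Toy example: a stubbed "model" and exact-match scoring.
samples = [("2+2=", "4"), ("capital of France?", "Paris")]
fake_model = {"2+2=": "4", "capital of France?": "Paris"}.get
exact = lambda out, ref: 1.0 if out == ref else 0.0
print(evaluate(fake_model, samples, exact))
```

Run the same harness, samples, and scoring function against every shortlisted vendor so the comparison is like-for-like; in practice the scoring function (exact match, rubric-based, or human-rated) matters as much as the sample set.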

Section 3: Pricing and Commercial Terms (15–20)

15
Have we modelled AI costs from pilot data — not projections? Commit to enterprise AI volumes only after measuring actual consumption from a meaningful pilot. See our AI usage pricing guide for the five modelling errors that cause cost projections to fail and the framework for building accurate models.
16
What is our input/output token ratio for each production use case? Output tokens cost 3–5× more than input tokens. Failure to model output volume accurately is the most common cause of AI budget overruns. Measure this specifically from your pilot workload.
17
Are we paying list price, or have we negotiated enterprise rates? Enterprise AI pricing at 25–40% below list is available at meaningful commitment volumes. Any enterprise buyer paying list price for AI at $500K+ annual spend should be negotiating. See our OpenAI Enterprise pricing benchmarks.
18
What happens if we overspend or underspend our committed volume? Volume commitments carry risk in both directions. Overspend triggers overage charges at list rates (typically 2–4× your committed rate). Underspend forfeits committed-but-unspent amounts. The contract must address both scenarios with reasonable financial exposure limits.
19
Does the agreement include price protection for the duration of our commitment? AI pricing is declining as infrastructure costs fall. An agreement that allows the vendor to raise prices during your commitment period eliminates the economics of your commitment. Insist on price lock for the committed term.
20
Have we identified all eligible batch API workloads? Batch processing APIs are priced at 50% of real-time API rates for all major providers. Identify and route all non-time-sensitive workloads to batch APIs before negotiating committed volumes — this directly reduces the volume you need to commit to.
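
Questions 15, 16, and 20 combine into one cost model: measured token volumes from your pilot, the input/output price asymmetry, and the batch discount. A minimal sketch; the prices and volumes below are hypothetical, so substitute figures from your pilot and the vendor's current price card:

```python
def monthly_cost(input_mtok, output_mtok, in_price, out_price, batch_share=0.0):
    """Monthly API cost in dollars.

    input_mtok / output_mtok: millions of tokens per month (from pilot data)
    in_price / out_price:     $ per million tokens (output often 3-5x input)
    batch_share:              fraction routed to batch APIs at a 50% discount
    """
    full_rate = input_mtok * in_price + output_mtok * out_price
    realtime = (1 - batch_share) * full_rate
    batched = batch_share * 0.5 * full_rate
    return realtime + batched

# Illustrative figures only -- not any vendor's actual pricing.
base = monthly_cost(1_000, 250, in_price=3.0, out_price=15.0)
with_batch = monthly_cost(1_000, 250, in_price=3.0, out_price=15.0, batch_share=0.4)
print(base, with_batch)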

Section 4: Contract Provisions (21–27)

See our AI Contract Clauses guide for detailed negotiating language on the provisions summarised below, alongside the other clauses it covers.

21
Does the contract explicitly assign IP ownership in all AI outputs to us? Ownership must cover copyright, database rights, and all other applicable IP rights without reservation. "You may use outputs" is insufficient — you need unconditional assignment.
22
Is there an explicit prohibition on the vendor using our data for model training? This must be an affirmative contractual prohibition with specific scope covering inputs, outputs, prompts, and usage metadata — not merely an opt-out mechanism.
23
Are model stability, change notification, and legacy access commitments included? Production AI deployments require minimum 90 days' notice of material model changes and access to prior model versions for the notification period.
24
Does the SLA meet enterprise production requirements? Standard API SLAs of 99.5% uptime allow 43+ hours of annual downtime. Enterprise production deployments require 99.95% uptime, latency SLAs, and rate limit guarantees.
25
Are audit rights over billing and usage data included? AI billing errors are common. Audit rights covering a 24-month lookback period, exercisable within 30 days of request, are a baseline commercial protection.
26
Does the contract include termination for convenience rights? AI technology evolves faster than typical enterprise contract cycles. Termination for convenience rights after an initial period protect against being locked into commercially or technically obsolete commitments.
27
Are data export and transition assistance rights explicitly included? Complete data portability at termination — including fine-tuned model parameters, training datasets, and usage logs — with 90–180 days of transition assistance is a baseline exit requirement.
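
The downtime figures in question 24 are simple arithmetic worth running yourself when a vendor quotes an SLA. A short sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def allowed_downtime_hours(uptime_pct):
    """Annual downtime permitted by an uptime SLA percentage."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

print(f"99.5%  SLA allows {allowed_downtime_hours(99.5):.1f} hours/year")
print(f"99.95% SLA allows {allowed_downtime_hours(99.95):.2f} hours/year")
```

A 99.5% SLA permits about 43.8 hours of annual downtime against roughly 4.4 hours at 99.95% — a tenfold difference that most standard API terms do not surface.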

Section 5: Security and Data Protection (28–32)

28
Have we classified the data types that will flow through the AI system? Determine whether the deployment involves personal data (triggering GDPR/privacy law requirements), regulated data (HIPAA, PCI-DSS, financial regulation), confidential intellectual property, or other sensitive categories that require specific contractual treatment.
29
Is the vendor's data processing agreement adequate for our regulatory obligations? Standard DPA templates provided by AI vendors are rarely adequate for enterprise regulatory requirements. Require negotiation of the DPA, not just signature of the vendor's standard template.
30
Does the deployment require data residency restrictions? EU-based organisations, regulated industries, and public sector organisations often require data to be processed and stored in specific jurisdictions. Confirm that the vendor's infrastructure can satisfy your data residency requirements — not just their standard region availability.
31
What is the vendor's breach notification obligation and timeline? Enterprise contracts should require breach notification within 72 hours of vendor discovery — consistent with GDPR obligations — rather than the 30-day windows common in standard vendor agreements.
32
Have we assessed prompt injection and AI-specific security risks for this deployment? Production AI systems face attack vectors specific to AI — prompt injection, model inversion, data extraction through carefully crafted queries. Confirm that your security assessment covers AI-specific risks alongside standard application security.

Section 6: EU AI Act Compliance (33–35)

33
Have we classified our AI deployment under the EU AI Act risk tiers? High-risk AI systems — including applications in employment, credit, healthcare, and critical infrastructure — require conformity assessments, human oversight mechanisms, and EU database registration. Confirm your deployment's classification before proceeding.
34
Does the vendor contract include obligations to support EU AI Act compliance? Your AI Act compliance obligations as a deployer depend on vendor cooperation. Contracts must address vendor documentation obligations, audit cooperation, and notification of changes that affect risk classification. See our EU AI Act contracts guide.
35
Have we established human oversight mechanisms for high-risk AI outputs? High-risk AI system deployments require documented human oversight processes that are meaningful, not nominal. A "human in the loop" checkbox does not satisfy the EU AI Act's oversight requirements if the human reviewer lacks the time, information, or authority to meaningfully review AI decisions.

Section 7: Governance and Ongoing Management (36–38)

36
Do we have a process for monitoring AI model quality in production? AI model quality degrades over time as production data distribution shifts from the model's training distribution. Implement automated quality monitoring with defined degradation thresholds that trigger model review or retraining.
37
Is there a budget owner and approval process for AI spend exceeding committed volumes? Token-based AI billing creates the potential for unconstrained spend growth that is invisible until the invoice arrives. Implement real-time spend monitoring and automated alerts at 75% and 90% of committed volume to prevent unexpected overspend.
38
Have we documented our AI procurement decisions for audit purposes? Regulatory scrutiny of AI procurement is increasing. Document vendor selection rationale, risk assessments, contractual protections negotiated, and ongoing monitoring processes to support future audits.
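
The alert thresholds in question 37 are straightforward to implement once spend metering exists. A minimal sketch, with the threshold values and dollar figures purely illustrative:

```python
def spend_alerts(spend_to_date, committed_volume, thresholds=(0.75, 0.90)):
    """Return the alert thresholds crossed by current spend.

    Pairs with real-time metering: call on each billing update and notify
    the budget owner the first time a new threshold appears in the result.
    """
    ratio = spend_to_date / committed_volume
    return [t for t in thresholds if ratio >= t]

# At 92% of a hypothetical $500K commitment, both alerts fire.
print(spend_alerts(460_000, 500_000))
```

The useful property is that alerts key off the commitment, not the calendar: a workload that consumes 90% of committed volume in month four of a twelve-month term is exactly the overspend scenario question 18 warns about.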

Section 8: Exit Planning (39–40)

39
Have we designed our AI architecture to preserve provider optionality? Tight coupling to a specific vendor's API and proprietary features creates switching costs that increase over time. Abstraction layer architectures, standardised prompt formats, and portable fine-tuning datasets preserve the ability to migrate without a full system rewrite. See our AI vendor lock-in guide.
40
Have we modelled the cost and timeline of migrating away from this vendor? Before signing, model the realistic cost and timeline of migrating to an alternative provider at contract expiry. If that cost is prohibitive, you are accepting lock-in as a commercial condition. Negotiate accordingly — either for better exit terms or for price protection that compensates for the lock-in you are accepting.
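
The abstraction-layer architecture in question 39 can be sketched in a few lines. The interface and adapter below are hypothetical illustrations, not any vendor's actual SDK:

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal provider-neutral interface the application codes against."""
    def complete(self, prompt: str, max_tokens: int) -> str: ...

class VendorAAdapter:
    """Adapter wrapping one (hypothetical) vendor; calls are stubbed here."""
    def complete(self, prompt: str, max_tokens: int) -> str:
        # In production this would translate to the vendor's API and back.
        return f"[vendor-a] {prompt[:20]}"

def summarise(provider: ChatProvider, text: str) -> str:
    # Application logic depends only on the abstract interface, so changing
    # vendors means writing a new adapter, not rewriting the application.
    return provider.complete(f"Summarise: {text}", max_tokens=256)

print(summarise(VendorAAdapter(), "quarterly report"))
```

The design choice this encodes is the one question 40 prices: with an adapter boundary, the migration cost you model at contract expiry is the cost of one adapter plus re-evaluation, not a full system rewrite.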

This checklist provides a structured foundation for enterprise AI procurement diligence. Advisory firms like Redress Compliance apply this framework and more — drawing on direct commercial experience with all major AI vendors — to guide enterprise buyers through procurement from initial strategy through contract execution and ongoing governance.

Related articles in the AI procurement series: AI Procurement Guide, AI Contract Clauses, OpenAI Enterprise Pricing, AI Usage Pricing, and our AI Procurement Advisory service. For SaaS procurement context, see our SaaS License Management Guide.


Complete AI Procurement Diligence With Expert Guidance

Our AI practice has guided 80+ enterprise AI procurements — from initial strategy through contract execution. We know which checklist items vendors will push back on and how to win those negotiations.
