How do you price B2B AI-enabled technology? As I talk to leaders at technology firms whose customer-facing solutions include AI, they all ask similar questions: How should pricing change with AI? Should it be usage-based or outcome-based? Can per-user or account-based pricing metrics still work? How do I align my pricing with value when the impact on customers is significant, usage is hard to estimate, and the solution may result in fewer users in the long run?

Spending cycles now on customer value and your pricing model is essential. Pricing is where the value exchange between your company and the customer occurs. With a true innovation, customers often don't yet understand the economic value, and projected usage can be difficult to estimate. Focusing on pricing now can help you avoid revenue disruption and, if you get it right, accelerate adoption of your solution.


What pricing strategy problem are you trying to solve? 

Typically, the answers fall into some or all of the following categories. You want to…

  • Gain adoption and reduce barriers to entry/acquisition for the customer
  • Be paid more when customers use more or get more value from the solution
  • Protect yourself from cost escalation and maintain profitability
  • Encourage people to use/adopt your solution once purchased
  • Limit the downward spiral of revenue as efficiency gains reduce the number of users
  • Win customers who might otherwise build their own solution 

Which of these you prioritize depends on your particular context, but it is critical to clarify your desired outcomes. "If you don't know where you are going, you'll end up someplace else." – Yogi Berra

Caveat – pricing strategy isn’t the only factor that drives these outcomes. Product performance and sales effectiveness are prerequisites. But pricing can be a significant barrier or an enabler. 


What are your primary use cases? How do customers get value?

A big challenge for innovative SaaS and technology companies leveraging LLMs is the uncertainty around customer usage patterns and levels. In the early days, it will be hard to estimate or predict usage. Despite this potential barrier to adoption, you can build a pricing model that is transparent, encourages usage, and makes sense to customers. 

Many SaaS companies today price per user, per query, per account, or with a fixed fee per capability tier, but usage-based structures are increasingly common. Software powered by LLMs often needs a combination of these approaches to mirror customer value and keep pricing simple, while still managing the risk of runaway costs or the shock of a cost surprise at the end of a month, quarter, or year.


A few examples to consider:

Per-User: AI-Powered Legal Research and Document Drafting (Harvey is a good example)

  • Current pricing structure: Fixed monthly fee based on # of users 
  • Why this might work short-term but not long-term: Initially, customers add seats as they adopt, so per-user fees grow. But the business value is faster work completion with fewer staff, so over the long term, revenue from existing customers will shrink unless the vendor shifts to a model, or adds elements, that align with usage (documents reviewed) or outcomes (contract rewrites) rather than with users alone. A user-based pricing metric might also be limiting adoption.
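To make the long-term risk concrete, here is a minimal sketch of the dynamic. All numbers (seat price, per-document fee, seat reduction, volume growth) are illustrative assumptions, not Harvey's actual pricing:

```python
# Illustrative only: per-seat revenue vs. a hybrid seat-plus-usage model
# as AI-driven efficiency reduces the customer's headcount over time.

SEAT_PRICE = 500   # assumed monthly fee per user
DOC_PRICE = 2      # assumed fee per document reviewed (hybrid model)

def per_seat_revenue(seats):
    """Monthly revenue under a pure per-user model."""
    return seats * SEAT_PRICE

def hybrid_revenue(seats, docs_reviewed):
    """Monthly revenue with a lower seat fee plus a usage component."""
    return seats * (SEAT_PRICE / 2) + docs_reviewed * DOC_PRICE

# Year 0: 100 seats, 20,000 docs/month.
# Year 2: efficiency cuts seats 40%, but document volume grows 50%.
for label, seats, docs in [("year 0", 100, 20_000), ("year 2", 60, 30_000)]:
    print(label, per_seat_revenue(seats), hybrid_revenue(seats, docs))
```

Under these assumptions, per-seat revenue falls from 50,000 to 30,000 as seats shrink, while the hybrid model grows from 65,000 to 75,000 because it captures the rising document volume, which is exactly the shift the vendor needs.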


Usage-based & outcome-based: AI Customer Support Agent (Decagon is a good example)

  • Current pricing structure: A usage-based option (per conversation) and an outcome-based option (resolved conversation)
  • How it works: Usage-based – if the AI handles 5,000 conversations, you are billed for 5,000 conversations. This is predictable: companies know their current customer support volumes, so they can estimate safely. (There is likely a different rate for voice versus chat.)
  • How it works: Outcome (resolution)-based – if the AI fully resolves 2,000 conversations without human intervention, the customer pays for those. This is trickier: the resolution criteria must be defined, some calls will fall into a "gray zone," and volumes are harder to estimate and may vary more at peak or stress times. But the benefit is that the customer doesn't pay unless the conversation is resolved. The price per resolved conversation should be significantly higher than the price per conversation handled.

The bottom line: the usage- and outcome-based pricing models here are transparent and work for both the vendor and the customer. They align with customer usage and value, and as customers use more, I'd assume they also benefit from a volume discount. But the vendor earns more revenue overall as it continues to improve its solution and the outcomes for its customers, as well as for its customers' customers!
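The two billing options above can be sketched in a few lines. The price points and resolution rate here are assumptions for illustration, not Decagon's actual pricing:

```python
# Illustrative sketch of the two billing options described above.
# Prices and resolution rates are assumptions, not actual vendor pricing.

PER_CONVERSATION = 1.00  # assumed usage-based price per conversation handled
PER_RESOLUTION = 2.50    # assumed (higher) price per fully resolved conversation

def usage_based_bill(conversations):
    """Bill for every conversation the AI handles."""
    return conversations * PER_CONVERSATION

def outcome_based_bill(conversations, resolution_rate):
    """Bill only for conversations resolved without human intervention."""
    resolved = int(conversations * resolution_rate)
    return resolved * PER_RESOLUTION

month = 5_000  # conversations handled in a month
print(usage_based_bill(month))          # all 5,000 conversations billed
print(outcome_based_bill(month, 0.40))  # only the 2,000 resolved ones billed
```

One design note falls out of the arithmetic: at these assumed prices, a 40% resolution rate is the break-even point between the two models (2.50 × 0.40 = 1.00), so as the AI's resolution rate improves, the outcome-based option earns the vendor more per conversation.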

Usage variables or outcome-based metrics, if done right, can help align your pricing model with customer value and with your cost structure. Your metrics and structure must be driven by the value your customers receive, must make sense to them, and must be perceived as fair. So as you design and then evolve your pricing model, keep these questions at the forefront:

  • How do customers use and get value from your solution?
  • What activities drive variable costs? Does that align with how customers get value?
  • Does your proposed model restrict or encourage trial, use, and expansion?  
  • As your platform or portfolio evolves, does the model still apply?
  • What will this model do to your revenue in 12 months? 24 months?

Put yourself in your customers’ shoes. They’re pretty familiar at this point with usage-based models, so if you can balance estimation and predictability with usage factors that align with value, they’ll get it. They need assurance that their costs won’t skyrocket without warning. 

How does AI drive your costs?

Most LLMs use a token-based model, with different prices for input and output tokens, and offer tiers based on capability and speed. MetaCTO does a thorough job of explaining Anthropic's API pricing and packaging model (price per 1M tokens) and covers the three levels of Claude 4.5 API offerings: Haiku 4.5, Sonnet 4.5, and Opus 4.5. These three tiers provide different capabilities in intelligence, complexity handling, volume, speed, and efficiency, each with a different cost and value profile. Which one you choose depends heavily on your use cases and on how customers use your solution and drive consumption of LLM inputs. For more detail on the variables to consider, and why you might choose one tier over another for your application, check out MetaCTO's detailed explanation.
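A back-of-envelope cost model makes the per-1M-token structure concrete. The token prices and volumes below are placeholders, not Anthropic's actual rates; plug in the current numbers from the provider's pricing page:

```python
# Back-of-envelope LLM cost model. Token prices are placeholder assumptions,
# NOT actual Anthropic rates -- substitute current published prices.

def llm_cost(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Dollar cost of one request, given per-1M-token input/output prices."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

# Example: a support query with ~2,000 input tokens and ~500 output tokens,
# at assumed rates of $3 / 1M input tokens and $15 / 1M output tokens.
per_request = llm_cost(2_000, 500, price_in_per_m=3.0, price_out_per_m=15.0)
monthly = per_request * 100_000  # assumed 100k requests per month

print(round(per_request, 4))  # LLM cost per request
print(round(monthly, 2))      # monthly cost to fold into pricing guardrails
```

Running scenarios like this across model tiers and usage levels is the quickest way to see which usage variables you need in your pricing model to keep margins intact.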

Being clear about what you and your customers need in terms of outcomes and performance, and understanding how customer usage will drive these costs, is a critical input to your pricing model decisions and ability to project profit.  Scenario planning with these costs in mind will help you decide which usage variables you can implement to provide cost guardrails and transparency for customers. 

Pricing strategy today versus tomorrow

Whatever you put in place today doesn't have to work perfectly a year from now. Things are changing quickly, and your pricing should evolve as you expand your solution, your use cases, and your customer segments. But if you can get the basic pricing building blocks right now, aligning the model with both value and costs, you can ensure customers try your solution, get value from it, and expand usage while you capture a fair share of that value.