Capacity's Approach to AI

Revised and effective: August 10, 2025

At Capacity, we are committed to helping teams do their best work. Since our founding in 2017, we have focused on using artificial intelligence (AI) to empower people to work more efficiently and better serve customers. As the AI field rapidly evolves, Capacity will remain grounded in our company values and stay true to our established guiding principles for the use of AI. 

Capacity is proud to be AI-native. From day one, we built our platform using AI, rather than adapting to dated technology. 

We think it is important to define some terms before we explain how we use AI in the Capacity platform.

  • Artificial Intelligence (AI) is a broad term that encompasses various techniques, algorithms, and approaches used to enable software to perform tasks that typically require human intelligence. AI used by Capacity includes:
    • Machine learning (ML): techniques for teaching software to improve its performance on specific tasks based on experience, without explicit programming.
    • Natural language processing (NLP): the ability to understand and generate human language, allowing AI systems to interact with users, analyze text data, or translate between languages.
  • Large language models (LLMs): AI models trained on very large data sets of text collected from various sources. Training at this scale allows the models to learn various topics, writing styles, and patterns, which makes generative AI built on LLMs more powerful.
  • Generative AI (GenAI) or generative pre-trained transformers (GPT): a subset of AI comprising algorithms that create new, human-like outputs based on patterns and structures learned from input data. This includes general-purpose LLMs as well as models fine-tuned for specific tasks.
  • Retrieval-augmented generation (RAG): a process applied to LLMs to make their outputs more relevant in specific contexts. RAG allows an LLM to access and reference information outside its own training data, such as an organization's specific knowledge base (for example, in a database, through an API call to an app, or on a webpage), before generating a response, and its outputs can include citations.
  • Intelligent Virtual Agents (IVAs): our term for "agent" functionality that goes beyond simple, rule-based automation by using AI to understand context, adapt to changing situations, and make intelligent decisions based on data analysis and predefined objectives.
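The RAG flow defined above can be sketched in a few lines. This is a simplified illustration only: the keyword-overlap retriever and the prompt format are hypothetical stand-ins, not Capacity's actual retrieval or generation pipeline.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, knowledge_base: dict[str, str], top_k: int = 2) -> list[tuple[str, str]]:
    """Rank knowledge-base documents by keyword overlap with the query; keep the best matches."""
    query_terms = tokenize(query)
    scored = [(doc_id, text, len(query_terms & tokenize(text)))
              for doc_id, text in knowledge_base.items()]
    scored.sort(key=lambda item: item[2], reverse=True)
    return [(doc_id, text) for doc_id, text, score in scored[:top_k] if score > 0]

def build_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Assemble the augmented prompt: retrieved context with citation ids, then the question."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return f"Answer using only the sources below, citing them by id.\n{context}\nQuestion: {query}"

kb = {
    "kb-101": "Password resets are handled on the account settings page.",
    "kb-102": "Support hours are 9am to 5pm Central, Monday through Friday.",
}
question = "How do I reset my password?"
prompt = build_prompt(question, retrieve(question, kb))
```

The key point the sketch shows is that the LLM never needs the knowledge base in its training data: relevant passages are retrieved at query time and supplied in the prompt, with ids that the model can cite.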

Capacity uses various AI tools and models to power our platform. Since inception, we have used a mixture of open-source and in-house NLP models that allow our platform (chat, SMS, email) to understand the intent of incoming requests and messages and provide more relevant answers to users. Capacity continuously improves and grows our customers' knowledge bases with state-of-the-art, built-in ML feedback systems. We have also incorporated natural language understanding (NLU) techniques that match against the entire context of words, phrases, paragraphs, and documents to determine what data should be used in a response, and we continue to use parts of our NLP technology to enhance the success of this more recent NLU-based approach. Capacity has reviewed these tools and models to confirm that they perform as expected, and all of them are hosted by us (in other words, Customer Data is not processed outside of Capacity's or the customer's private cloud tenant environment when these ML, NLP, and NLU models are used).
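The context-matching idea above can be illustrated with a toy similarity scorer: an incoming request is compared against stored answers, and the closest match wins. Production NLU systems use learned embeddings; the bag-of-words vectors and example texts here are simplified assumptions for illustration.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Build a bag-of-words term-frequency vector (a stand-in for a learned embedding)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse term-frequency vectors."""
    dot = sum(a[term] * b[term] for term in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(request: str, answers: list[str]) -> str:
    """Return the stored answer whose full context is most similar to the request."""
    request_vec = vectorize(request)
    return max(answers, key=lambda ans: cosine_similarity(request_vec, vectorize(ans)))

answers = [
    "Invoices can be downloaded from the billing dashboard.",
    "New team members are added from the admin console.",
]
match = best_match("Where can I download an invoice?", answers)
```

Because the whole answer text is scored rather than a single keyword, this kind of matching tolerates paraphrased requests better than exact rule-based lookups.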

Since 2023, we have integrated generative AI capabilities into the Capacity platform. LLMs are used in the platform to provide a better understanding of the language in queries and to generate human-like text in the desired tone (e.g., casual or formal), such as a draft response to a customer's end user or draft KnowledgeBase Q&A pairs.

The Capacity platform includes the ability for users to create powerful Intelligent Virtual Agents (IVAs) using our LLMs. Our IVAs are designed with a security-first approach, ensuring that your data is always protected. IVAs access your connected tools and data sources on a "need-to-know" basis, governed by the permissions you explicitly grant them. This is managed through our robust Role-Based Access Control (RBAC) system. Granting access to an IVA is no different than granting access to a User. This allows for a scalable, intuitive management experience that gives you confidence in how your systems and data are accessed.
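The "need-to-know" RBAC model described above, where an IVA is governed by the same permission checks as a human user, can be sketched as follows. The `Principal` type and `POLICY` table are hypothetical illustrations, not Capacity's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    """A user or an IVA; both carry roles and pass through the same check."""
    name: str
    roles: set[str] = field(default_factory=set)

# Role-based policy: which roles may perform which actions on which resources.
POLICY = {
    ("helpdesk", "read", "knowledge_base"),
    ("helpdesk", "read", "ticket_queue"),
    ("admin", "write", "knowledge_base"),
}

def is_allowed(principal: Principal, action: str, resource: str) -> bool:
    """Permit the action only if one of the principal's roles is granted it."""
    return any((role, action, resource) in POLICY for role in principal.roles)

# An IVA is granted roles exactly the way a user would be.
iva = Principal("billing-iva", roles={"helpdesk"})
can_read = is_allowed(iva, "read", "knowledge_base")    # helpdesk role grants read
can_write = is_allowed(iva, "write", "knowledge_base")  # write requires admin
```

The design point is that there is one enforcement path: because an IVA is just another principal, tightening a role's permissions immediately constrains every agent and user holding that role.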

Capacity has reviewed and selected the LLMs that we license after evaluating their technical capabilities against our target use cases (focusing on performance metrics like accuracy, latency and benchmark performance), their commercial terms (such as pricing, usage limits and deployment requirements) and their data privacy features and security controls. Specifically regarding data privacy and security, as applicable, we have verified the licensors’ policies and contractual commitments on training of models and data retention, examined their encryption standards and security monitoring practices, and reviewed their SOC 2 certifications. 

  • In some components of our platform, LLMs are selectable by our customer from a list of models we have vetted and licensed and/or from customer-provided models.

  • We license and host LLMs on our own dedicated secure cloud instances in AWS and Microsoft Azure (or the customer’s private cloud tenant deployments if contractually agreed).

  • We also license certain enterprise models from OpenAI. We have direct licenses with OpenAI that ensure any data, inputs, or outputs generated by Capacity and our customers are not used to train OpenAI's models. OpenAI is SOC 2 compliant, and all conversations are encrypted in transit and at rest.

Any fine-tuning or tailoring of LLMs to meet a specific customer's requirements is not incorporated into the training of models for other Capacity customers or for customers of the organizations, such as OpenAI, from which we license LLMs.

The foundations of Capacity’s approach to Generative AI are as follows:

  • We review the LLMs that we license

  • We do not sell customer data to train LLMs

  • We do not train LLMs on customer data unless the customer requests that we do so specifically for that customer in order to better train the model for its usage

  • We are compliant with HIPAA, GDPR, GLBA and other applicable data privacy regulations

  • Our platform is subject to annual SOC 2 Type 2 and ISO 27001 audits by outside auditors

  • We conduct security awareness training for our team members on a regular basis

  • We perform an annual IT risk assessment evaluating technology and security risks


We welcome any questions or interest you have in AI, including GPT and LLMs.
