Capacity's Approach to AI

At Capacity, we are committed to helping teams do their best work. Since our founding in 2017, we have focused on using artificial intelligence (AI) to empower people to work more efficiently and better serve customers. As the AI field rapidly evolves, Capacity will remain grounded in our company values and stay true to our established guiding principles for the use of AI. 

Capacity is proud to be AI-native. From day one, we built our platform using AI, rather than adapting to dated technology. 

We think it is important to define some terms before we explain how we use AI in the Capacity platform.

  • Artificial intelligence (AI): a broad term that encompasses the various techniques, algorithms, and approaches used to enable software to perform tasks that typically require human intelligence. AI used by Capacity includes:
    • Machine learning (ML): techniques for teaching software to improve its performance on specific tasks based on experience, without explicit programming.
    • Natural language processing (NLP): techniques for processing and manipulating text or speech data so that AI systems can understand and generate human language, allowing them to interact with users, analyze text data, or translate between languages.
    • Natural language understanding (NLU): a subset of NLP with the ability to match against the entire context of words, phrases, paragraphs, and documents to determine what data should be used in a response. NLU delves deeper into the meaning and context behind language, enabling more sophisticated interactions between AI systems and users.
  • Large language models (LLMs): large models that make language predictions based on training done on very large data sets of collected text. An LLM can predict each word in a translation from English to French, or the best words to summarize a recording of a meeting.
  • Generative AI or generative pre-trained models (GPT): a subset of AI referring to a group of algorithms that create new, human-like outputs based on patterns and structures learned from input data. Generative AI includes LLMs as well as models fine-tuned for specific tasks.

Capacity uses various AI tools and models to power our platform. Since inception, we've used a mixture of open-source and in-house NLP models that allow our chatbot to understand the intent of incoming requests and messages and provide more relevant answers. Capacity continuously improves and grows customer knowledge bases with state-of-the-art, built-in ML feedback systems. Capacity has begun to incorporate NLU techniques to match against the entire context of words, phrases, paragraphs, and documents to determine what data should be used in a response, and continues to use parts of our NLP technology to enhance the success of our new NLU-based approach. Capacity has reviewed these tools and models to confirm that they perform as expected, and all of them are hosted by us (in other words, Customer Data is not processed outside of the Capacity AWS environment when these ML, NLP, and NLU models are used).
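To make the matching idea concrete, here is a minimal, illustrative sketch of how a query can be matched against stored knowledge base entries by similarity. This is a toy bag-of-words example for explanation only; it does not reflect Capacity's actual models, and the `kb` entries and function names are hypothetical.

```python
from collections import Counter
from math import sqrt


def vectorize(text: str) -> Counter:
    """Toy bag-of-words vector: lowercase token -> count, punctuation stripped."""
    return Counter(w.strip(".,?!") for w in text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def best_match(query: str, knowledge_base: dict[str, str]) -> str:
    """Return the answer whose stored question is most similar to the query."""
    qv = vectorize(query)
    best_q = max(knowledge_base, key=lambda q: cosine_similarity(qv, vectorize(q)))
    return knowledge_base[best_q]


# Hypothetical knowledge base entries for illustration.
kb = {
    "how do i reset my password": "Visit Settings > Security and click Reset Password.",
    "what are your support hours": "Support is available 9am-5pm CT, Monday-Friday.",
}
print(best_match("I forgot my password, how can I reset it?", kb))
```

In practice, NLU systems use learned embeddings over whole phrases, paragraphs, and documents rather than word counts, but the core idea of scoring candidate knowledge against the full context of a query is the same.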

Beginning in 2023, we have integrated Generative AI capabilities into the Capacity platform for customers who opt to turn these tools on (see AI Tools in the Capacity Platform on our Support site for details). LLMs are used in the Capacity platform to better understand the language in queries and to generate human-like text in the desired tone (casual or formal), such as in a draft response to a customer's end-user or in draft KnowledgeBase Q&A pairs.

Capacity has two types of LLMs available for use in our platform. Each customer opting to enable our AI features can choose between these two approaches.

  1. INTERNAL (Capacity AI) – Capacity utilizes several refined, self-hosted large language models (LLMs), each of which is based on a pre-trained open-source model. We host our LLMs on our own dedicated AWS instance. We believe that our self-hosted LLMs will be an attractive alternative for customers who would prefer not to use OpenAI's LLMs, such as those in the healthcare sector. The LLMs are hosted in the Capacity platform, which adheres to SOC 2 Type 2 compliance standards.
  2. EXTERNAL – Capacity licenses OpenAI's GPT-3.5 Turbo and GPT-4 Turbo models, which offer enterprise clients like Capacity the ability to leverage these advanced large language models (LLMs). These models are accessible through OpenAI's API, which supports a context window of up to 128,000 tokens. We find that these powerful LLMs are particularly well suited to scenarios that demand robust language processing capabilities, and they are available to customers who opt for this solution. Here are details on our license with OpenAI:
    • OpenAI's API license ensures that any data, inputs, or outputs generated by Capacity and its customers are not utilized to train OpenAI's models. Any fine-tuning or tailoring of these models to meet specific Capacity requirements is not distributed to other OpenAI clients or incorporated into the training of additional models.
    • We have signed a BAA with OpenAI pursuant to which OpenAI may create, receive, maintain or transmit PHI from Capacity customers. Our OpenAI access is through its “Zero Data Retention” transmission and processing solution that is designed to avoid retention of any PHI.
    • The OpenAI API Platform adheres to SOC 2 Type 2 compliance standards.

The foundations of Capacity’s approach to Generative AI are as follows:

  • We do not utilize OpenAI’s LLM unless a customer specifically authorizes it
  • We do not sell or make available customer data to train LLMs
  • We do not train our internal LLM on customer data unless the customer requests that we do so in order to better train the model for its usage
  • We are compliant with HIPAA, GDPR, GLBA and other applicable data privacy regulations 
  • We have received a SOC 2, Type 2 report from an outside auditor
  • We conduct security awareness training for our team members on a regular basis
  • We perform an IT Risk assessment evaluating the risk of technology and security on an annual basis
  • We have created Generative AI tools for specific use cases designed to increase efficiency and accuracy in delivering answers to end-users, and we have designed our platform so that output suggested by GPT and LLM tools is, by default, reviewed by our customers' agents before being sent to end-users
  • When we incorporate LLMs to produce responses for our customers, we instruct the LLM to create responses tied to, for example, documented guidelines or a customer's own data, rather than having the LLM generate responses freely and potentially hallucinate by pulling outdated or irrelevant material from its general training
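The grounding approach described in the last point can be sketched as prompt construction: the model is given only vetted passages and instructed to answer from them alone. This is a simplified, hypothetical illustration, not Capacity's production prompt; the function name and wording are assumptions.

```python
def build_grounded_prompt(question: str, retrieved_passages: list[str], tone: str = "formal") -> str:
    """Assemble a prompt that restricts the model to the supplied customer data.

    In a real pipeline, `retrieved_passages` would come from the NLU matching
    step; here they are passed in directly for illustration.
    """
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return (
        f"Answer in a {tone} tone using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )


prompt = build_grounded_prompt(
    "What are your support hours?",
    ["Support is available 9am-5pm CT, Monday-Friday."],
    tone="casual",
)
print(prompt)
```

Constraining the model to retrieved, customer-specific context in this way is what keeps generated drafts tied to documented guidelines rather than to the model's general training data.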

We welcome any questions or interest you have in AI, including GPT and LLMs.
