Capacity's Approach to AI

At Capacity, we are committed to helping teams do their best work. Since our founding in 2017, we have focused on using artificial intelligence (AI) to empower people to work more efficiently and better serve customers. As the AI field rapidly evolves, Capacity will remain grounded in our company values and stay true to our established guiding principles for the use of AI. 

Capacity is proud to be AI-native. From day one, we built our platform around AI rather than retrofitting it onto legacy technology.

We think it is important to define some terms before we explain how we use AI in the Capacity platform.

  • Artificial Intelligence (AI) is a broad term that encompasses various techniques, algorithms, and approaches used to enable software to perform tasks that typically require human intelligence. AI used by Capacity includes:
    • Machine learning (ML): techniques for teaching software to improve its performance on specific tasks based on experience, without explicit programming.
    • Natural language processing (NLP): the ability to understand and generate human language, allowing AI systems to interact with users, analyze text data, or translate between languages.
    • Large language models (LLMs): large models that make language predictions based on training over very large data sets of collected text. An LLM can predict each word in a translation from English to French, or the best words to summarize a meeting recording.
    • Generative AI or generative pre-trained transformers (GPT): a subset of AI comprising models that create new, human-like outputs based on patterns and structures learned from input data; this includes LLMs as well as models fine-tuned for specific tasks.

Capacity uses various AI tools and models to power our platform. Since inception, we've used a mixture of open-source and in-house NLP models that allow our chatbot to understand the intent of incoming requests and messages and provide more relevant answers. Capacity continuously improves and grows customer knowledge bases with state-of-the-art, built-in ML feedback systems. Capacity has reviewed these tools and models to confirm that they perform as expected, and all of them are hosted by us (in other words, Customer Data is not processed outside the Capacity AWS environment when these ML and NLP models are used).

Beginning in 2023, we integrated Generative AI capabilities into the Capacity platform, available when a customer opts to turn these tools on (see AI Tools in the Capacity Platform on our Support site for details). LLMs are used in the Capacity platform to better understand the language of incoming queries and to generate human-like text in a desired tone (casual or formal), such as a draft response to a customer's end-user or draft KnowledgeBase Q&A pairs.

Capacity has two LLMs available for use in our platform. Each customer opting to enable our AI features can choose between these two models.

  1. INTERNAL (Capacity AI) – Capacity offers a refined, self-hosted large language model (LLM) based on a pre-trained open-source model, supporting a context window of up to 32,768 tokens. We host this LLM on our own dedicated AWS instance. We believe it is an attractive alternative for customers who prefer not to use OpenAI's LLMs, such as those in the healthcare sector. Here are specifics about the Capacity LLM:
    • The makers of this LLM ensure it is trained on clean, well-curated datasets, which reduces the risk of the model ingesting and replicating any harmful, biased, or inappropriate content.
    • The LLM is hosted in the Capacity platform, which adheres to SOC 2 Type 2 compliance standards.
  2. EXTERNAL – Capacity licenses OpenAI's GPT-3.5 Turbo and GPT-4 Turbo models, which give enterprise clients like Capacity access to these advanced LLMs. The models are accessed through OpenAI's API, with a context window of up to 128,000 tokens. We find these powerful LLMs particularly well-suited for scenarios that demand robust language processing, and they are available to customers who opt for this solution. Here are details on our license with OpenAI:
    • OpenAI's API license ensures that any data, inputs, or outputs generated by Capacity and its customers are not utilized to train OpenAI's models. Any fine-tuning or tailoring of these models to meet specific Capacity requirements is not distributed to other OpenAI clients or incorporated into the training of additional models.
    • We have signed a BAA with OpenAI pursuant to which OpenAI may create, receive, maintain, or transmit PHI from Capacity customers. Our OpenAI access is through its "Zero Data Retention" transmission and processing solution, which is designed to avoid retention of any PHI.
    • The OpenAI API Platform adheres to SOC 2 Type 2 compliance standards.

The foundations of Capacity’s approach to Generative AI are as follows:

  • We do not utilize OpenAI’s LLM unless a customer specifically authorizes it
  • We do not sell or make available customer data to train LLMs
  • We do not train our internal LLM on customer data unless the customer requests that we do so in order to better train the model for its usage
  • We are compliant with HIPAA, GDPR, GLBA and other applicable data privacy regulations 
  • We have received a SOC 2, Type 2 report from an outside auditor
  • We conduct security awareness training for our team members on a regular basis
  • We perform an IT Risk assessment evaluating the risk of technology and security on an annual basis
  • We have created Generative AI tools for specific use cases designed to increase the efficiency and accuracy of answers delivered to end-users, and we designed our platform so that output generated by GPT and LLM tools is, by default, reviewed by our customers' agents before being sent to end-users
  • When we incorporate LLMs to produce responses for our customers, we instruct the LLM to create responses tied to, for example, documented guidelines or a customer's own data, rather than letting the LLM generate freely and potentially hallucinate by pulling out-of-date or irrelevant information from its general training

On October 30, 2023, the White House issued an Executive Order (EO) on the use and development of AI, setting out directions for federal agencies and the largest AI developers. Specifically, the EO instructs federal agencies to use regulatory and enforcement tools to address the safety and national security implications of new LLMs; the privacy of Americans' personal data; and the development of policies for the responsible use of AI in law enforcement, healthcare, and the workplace, along with initiatives to foster innovation and competition and collaboration with international AI regulatory efforts. Capacity does not currently see this EO directly affecting our platform. However, we recognize and appreciate the need for responsible and trustworthy development and use of AI, and we believe that our approach to AI and GPT is consistent with the EO's principles.

We welcome any questions or interest you have in AI, including GPT and LLMs.

For more information, please visit our Support site.


