Selecting an AI Model
In this short guide, we'll walk you through picking an AI model on the Capacity platform. This decision matters because it determines the capabilities and limitations of your bot.
Getting Started
- Accessing the Capacity Console: To initiate the process, you need to log into the Capacity Console.
- Exploring the AI Settings: Once logged in, navigate to the AI settings, which list the different AI models you can use.
Understanding the AI Models
Before you make your decision, it's important to understand the capabilities of the available models. The AI model you choose is used wherever you use generative AI inside your console, such as drafting with Autopilot or powering your chatbot through Guided Conversations. To learn about our approach to AI, click here.
Each of these models has unique strengths that cater to different user needs. Be sure to consult your security team to make the right choice.
Here’s a quick rundown of your options:
Capacity AI (Capacity's Internal Model)
- Type: Internal, in-house foundational model (Llama 3.1-based)
- Best for: Standard question answering with robust privacy (SOC 2 & HIPAA certified)
- Token Limit: 128K
- Highlights: Reliable, compliant, and secure—ideal if you require complete data privacy and internal controls.
- Data Retention: Zero Data Retention
Latest GPT-4 (Standard)
- Provider: OpenAI (via Capacity)
- Best for: Handling complex customer inquiries that need detailed responses and nuanced understanding.
- Token Limit: Up to 1 million tokens
- Highlights:
- Advanced comprehension and generation
- Expanded context window—great for support scenarios involving large documents or histories
- Data Retention: Zero Data Retention (We have signed a BAA with OpenAI under which OpenAI may create, receive, maintain, or transmit PHI from Capacity customers. Our OpenAI access uses OpenAI's "Zero Data Retention" transmission and processing solution, which is designed to avoid retention of any PHI.)
Latest GPT-4 (Mini)
- Best for: General customer support, balancing strong performance with efficiency
- Highlights: Maintains GPT-4 capabilities in a more resource-conscious format—optimal for day-to-day interactions.
- Data Retention: Zero Data Retention (covered by the same BAA and "Zero Data Retention" arrangement with OpenAI described above)
Latest GPT-4 (Nano)
- Best for: Simple, high-volume queries needing fast, concise responses
- Highlights: Optimized for speed and efficiency—Capacity’s fastest model for straightforward tasks.
- Data Retention: Zero Data Retention (covered by the same BAA and "Zero Data Retention" arrangement with OpenAI described above)
Latest OpenAI o-Mini (Reasoning)
- Best for: Tasks requiring advanced reasoning and problem-solving (complex coding, mathematical or logical tasks, strategic planning)
- Highlights: Superior cognitive processing for intricate queries and detailed reasoning.
- Data Retention: Zero Data Retention (covered by the same BAA and "Zero Data Retention" arrangement with OpenAI described above)
Why These New Models Are Awesome:
- Enhanced Coding Capabilities: The GPT-4.1 models outperform even GPT-4o and GPT-4.5 in coding by 21–27%, making your bot more adept at tackling technical issues.
- Expanded Context: Support for up to 1 million tokens enables the bot to manage large information sets and deliver context-rich responses.
- Improved Instruction Following: These models interpret your prompts and rules more literally, producing more precise, user-specific interactions.
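Token limits matter in practice: a quick estimate of how many tokens your content uses tells you whether it fits a model's context window. The sketch below uses the common rule of thumb of roughly four characters per token for English text; the model keys and limits are taken from the figures in this guide, and actual counts vary by model and tokenizer.

```python
# Rough token estimate for checking content against a model's context window.
# The ~4-characters-per-token rule is a heuristic for English text, not an
# exact tokenizer; the limits below come from the model rundown in this guide.

MODEL_TOKEN_LIMITS = {
    "capacity-ai": 128_000,       # Capacity's internal model
    "gpt-4-standard": 1_000_000,  # "up to 1 million tokens"
}

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Return a rough token count for `text`."""
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, model: str) -> bool:
    """Check whether the estimated token count fits the model's limit."""
    return estimate_tokens(text) <= MODEL_TOKEN_LIMITS[model]
```

If an estimate comes in near a model's limit, treat it as a signal to test with real content, since tokenizers count whitespace, punctuation, and non-English text differently.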
Selecting the AI Model
- In the AI settings, select the model you've chosen.
- After choosing your model, click "Save."
- You can verify how the model performs in the Help Desk AI toolkit, the AI Inquiry Generator, and Guided Conversations.
While the Capacity platform lets you switch AI models after your initial selection, keep in mind that switching can lead to significant challenges.

The model you choose shapes how your workflows, guided conversations, and overall interface operate, so changing models may require reconfiguring existing workflows and guided conversations to match the new model's capabilities and limitations. For instance, if you initially selected Capacity's Internal AI Model and later switched to Capacity's External AI Model, you may need to adjust your workflows to accommodate the different token limit. These modifications can be time-consuming, can disrupt the smooth functioning of your system, and can misalign your previously set preferences.

Manual adjustments also introduce a risk of errors and inconsistencies, leading to suboptimal platform performance. Consistency and stability in model selection are key to a smooth experience, so it's prudent to stay with the model you've chosen from the beginning.
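If you do move to a model with a smaller context window, one common adaptation is to split long documents or conversation histories into chunks that fit the new limit. The helper below is an illustrative sketch, not a Capacity API; the four-characters-per-token budget is a rough heuristic, and it splits on paragraph boundaries where possible.

```python
# Illustrative sketch: split text into chunks that fit a smaller context
# window (e.g. moving from a 1M-token model to a 128K-token model).
# `chunk_text` is a hypothetical helper, not part of the Capacity platform.

def chunk_text(text: str, max_tokens: int, chars_per_token: int = 4) -> list[str]:
    """Split `text` into pieces of at most ~max_tokens each,
    preferring paragraph boundaries."""
    max_chars = max_tokens * chars_per_token
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) + 2 <= max_chars:
            current = f"{current}\n\n{para}" if current else para
        else:
            if current:
                chunks.append(current)
            # A single paragraph longer than the budget is split hard.
            while len(para) > max_chars:
                chunks.append(para[:max_chars])
                para = para[max_chars:]
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Paragraph-boundary splitting keeps each chunk coherent, which generally produces better answers than cutting mid-sentence.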
You've Selected Your AI Model—What Comes Next?
Congratulations! Having successfully selected your AI model, you've made a significant step towards customizing and enhancing your Capacity platform capabilities. However, the journey doesn't end here.
Next, you'll need to prepare your documents for integration with your selected model. This process involves organizing your documents into themes or specific folders for effective indexing and appropriate audience delivery. For instance, you might want to create folders for different departments such as HR, Sales, Operations, and Customer Support. This robust organization facilitates intelligent document retrieval and improves your Capacity platform’s performance.
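As a concrete illustration of that kind of organization, the sketch below sorts files into department folders based on keywords in their file names. The folder names, keyword mapping, and helper functions are hypothetical examples for this guide, not part of the Capacity platform.

```python
# Illustrative sketch: sort documents into department folders before indexing.
# Department names and keywords are examples; adjust them to your own content.
from pathlib import Path
import shutil

DEPARTMENT_KEYWORDS = {
    "HR": ["benefits", "onboarding", "payroll"],
    "Sales": ["pricing", "quota", "pipeline"],
    "Operations": ["logistics", "inventory"],
    "Customer Support": ["faq", "troubleshooting"],
}

def department_for(filename: str) -> str:
    """Pick a department folder based on keywords in the file name."""
    name = filename.lower()
    for dept, keywords in DEPARTMENT_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return dept
    return "Unsorted"

def organize(source: Path, dest_root: Path) -> None:
    """Move each file in `source` into a department subfolder of `dest_root`."""
    for path in source.iterdir():
        if path.is_file():
            target_dir = dest_root / department_for(path.name)
            target_dir.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target_dir / path.name))
```

An "Unsorted" catch-all folder makes it easy to spot documents that still need manual triage before indexing.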