OpenAI and Scale have teamed up to make it easier for companies to use our advanced AI models. When deploying AI in production, companies want high performance, the ability to customize, and control over their AI. That’s why we recently introduced fine-tuning for GPT-3.5 Turbo, and we plan to bring it to GPT-4 this fall. Fine-tuning allows companies to personalize our powerful models with their own proprietary data, making them even more useful. Importantly, the data used for fine-tuning is owned by the customer and is not used by OpenAI, or anyone else, to train other models.
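As a rough sketch of what fine-tuning on proprietary data involves, the GPT-3.5 Turbo fine-tuning endpoint accepts training data as a JSONL file in the chat message format. The snippet below (with hypothetical example content and file name) shows how such a file might be prepared; the actual upload and job creation happen separately through the API.

```python
import json

# Each training example is a JSON object with a "messages" list of
# system/user/assistant turns, one example per line of the JSONL file.
# The content here is purely illustrative.
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
]

def write_jsonl(examples, path):
    """Serialize training examples to a JSONL file, one per line."""
    with open(path, "w", encoding="utf-8") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")

write_jsonl(training_examples, "train.jsonl")
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job via the OpenAI API.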
To provide the best enterprise-grade functionality, we’re working with Scale as our preferred partner. Scale has deep expertise in securely and effectively leveraging data for AI, and can help companies take full advantage of our fine-tuning capability. Scale customers can now fine-tune OpenAI models just as they would through OpenAI, with the added benefit of Scale’s enterprise AI expertise and Data Engine.
Scale has already demonstrated the value of fine-tuning GPT-3.5 for Brex. You can learn more about this partnership and its benefits for customers on Scale’s blog.
Overall, this partnership between OpenAI and Scale brings together the power of our advanced models with Scale’s expertise in enterprise AI. It allows companies to customize and control their AI models while benefiting from Scale’s knowledge and experience. Stay tuned for more updates on our fine-tuning capabilities!