WizardCoder: Revolutionizing Code Creation with Code Evol-Instruct and Achieving SOTA Performance

AI Language Models Achieve State-of-the-Art Performance in Code Generation

Large Language Models (LLMs) like OpenAI’s ChatGPT have gained popularity for their impressive performance. These models are trained on massive amounts of internet data and fine-tuned with specific instructions to achieve state-of-the-art performance in various tasks. Code LLMs, specifically designed for activities involving code, have also shown remarkable performance. These models are pre-trained on code data and excel in code-related tasks.

Improving Fine-Grained Instruction Tailoring in Code Generation

Prior Code LLMs have mainly focused on pre-training, while fine-grained instruction tuning in the code domain remains underexplored. To improve the generalization skills of LLMs, researchers have experimented with instruction tuning: OpenAI’s InstructGPT and Stanford’s Alpaca used instruction-following data to better align models with users’ objectives. However, these approaches gave little consideration to the code domain. Inspired by the Evol-Instruct approach, researchers from Microsoft and Hong Kong Baptist University have adapted it to code and applied it to the StarCoder Code LLM to enhance its capabilities.
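The core idea of Code Evol-Instruct is to repeatedly ask a teacher LLM to rewrite a seed coding instruction into a harder variant, using a small set of code-specific evolution heuristics (adding constraints, demanding a specific complexity, inserting misleading buggy code, and so on). A minimal sketch of that loop might look like the following; the heuristic strings paraphrase the ideas described above, and `call_llm` is a hypothetical placeholder for whatever teacher-model API is used, not a real client:

```python
import random

# Heuristics that make a programming task harder; these paraphrase the
# kinds of evolutions described for Code Evol-Instruct (assumed wording).
EVOLUTION_HEURISTICS = [
    "Add new constraints and requirements to the original problem.",
    "Replace a commonly used requirement with a less common one.",
    "Require a specific, higher time or space complexity.",
    "Provide a piece of erroneous code as misdirection.",
    "Deepen the problem so it needs more reasoning steps.",
]


def build_evolve_prompt(instruction: str, heuristic: str) -> str:
    """Compose the meta-prompt sent to the teacher LLM."""
    return (
        "Please increase the difficulty of the given programming question.\n"
        f"Method: {heuristic}\n"
        f"Question: {instruction}\n"
        "Rewritten question:"
    )


def evolve(instruction: str, rounds: int, call_llm) -> list[str]:
    """Run several evolution rounds, collecting each harder variant.

    call_llm: a function str -> str wrapping the teacher model (placeholder).
    Returns the seed instruction followed by one evolved variant per round.
    """
    dataset = [instruction]
    for _ in range(rounds):
        heuristic = random.choice(EVOLUTION_HEURISTICS)
        prompt = build_evolve_prompt(dataset[-1], heuristic)
        dataset.append(call_llm(prompt))
    return dataset
```

The evolved instructions (paired with the teacher's solutions) then form the fine-tuning set; the seed is kept so easy and hard variants are both represented.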

The WizardCoder: Achieving State-of-the-Art Performance

The researchers used code-specific Evol-Instruct to produce detailed code instruction data and fine-tuned StarCoder on it to create WizardCoder. Experimental findings on four code-generation benchmarks (HumanEval, HumanEval+, MBPP, and DS-1000) demonstrate that WizardCoder outperforms all other open-source Code LLMs. WizardCoder even surpasses major closed-source LLMs like Claude and Bard on code generation.
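Benchmarks like HumanEval and MBPP are typically scored with the pass@k metric: generate n candidate programs per problem, count how many pass the hidden unit tests, and estimate the chance that at least one of k randomly drawn samples is correct. A sketch of the standard unbiased estimator (as introduced with HumanEval; not code from this paper) is:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated for a problem
    c: samples that pass the unit tests
    k: evaluation budget
    Returns P(at least one of k drawn samples is correct)
    = 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k failures exist, so any k-sample draw
        # must contain a correct solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

A benchmark score is then the mean of `pass_at_k` over all problems; for example, with 2 samples of which 1 passes, pass@1 is 0.5.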

Key Contributions of this Work

Here are the key contributions of this research:

  • Creation of WizardCoder, an improved version of the open-source Code LLM StarCoder, using Code Evol-Instruct
  • Evidence that WizardCoder outperforms all other open-source Code LLMs, and even major closed-source LLMs, on code-generation benchmarks

For more details, refer to the paper and the GitHub repository. Credit goes to the researchers involved in this project.
