Unlocking the Power of Large Language Models: Enhancing Skills through Data

New AI Technique to Accelerate Learning: SKILL-IT

Large language models (LMs) are amazing at coding, creating art, and talking with people. But the key to their skills lies in the data they are trained on. Enhancing this training data can unlock even more capabilities. However, choosing the right data from a massive corpus can be challenging. Most existing algorithms rely on guesswork and heuristics. What we need is a formal framework that describes how data affects an AI model’s abilities and how to use this data to improve its performance.

That’s where SKILL-IT comes in. Inspired by how humans learn, researchers from Stanford University, the University of Wisconsin-Madison, Together AI, and the University of Chicago have developed an online data selection system. SKILL-IT takes advantage of skill orderings to rapidly acquire new abilities. It provides a framework that links data to LM training and behavior.

The researchers define a skill as a unit of behavior that an LM can learn using a specific slice of data. They discovered that skills are not learned in isolation but are interconnected. In other words, learning a skill becomes easier when its prerequisite skills are also mastered. By understanding these ordered skill sets, we can optimize the training process and speed up learning.
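These skill orderings can be pictured as a directed graph: an edge from skill A to skill B means that training data for A makes B easier to learn. As a minimal sketch (the skill names and edges below are illustrative, not taken from the paper), a topological sort of such a graph yields a curriculum in which prerequisites come before the skills that depend on them:

```python
from graphlib import TopologicalSorter

# Hypothetical skill graph: each skill maps to its prerequisite skills.
# An edge "A -> B" means data for skill A accelerates learning skill B.
skill_prereqs = {
    "spanish_qa": {"spanish", "english_qa"},
    "english_qa": {"english"},
    "spanish": set(),
    "english": set(),
}

# Any topological order respects every prerequisite edge, so the
# base skills surface before the composite skills built on them.
order = list(TopologicalSorter(skill_prereqs).static_order())
print(order)  # prerequisites appear before the skills that need them
```

This static ordering is only half the story, though: SKILL-IT selects data online during training rather than fixing a curriculum up front.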

SKILL-IT offers two approaches for selecting data: skill-stratified sampling and online data selection. Skill-stratified sampling optimizes learning explicitly by sampling uniformly over the relevant skills. However, it keeps oversampling skills that the model has already acquired earlier in training. The online method addresses this by dynamically giving higher weight to skills that have yet to be learned, or to influential prerequisite skills.

The researchers tested SKILL-IT on artificial and real datasets, and the results were impressive. They showed significant improvements in accuracy and lower loss compared to random sampling and other selection methods. SKILL-IT was able to achieve the lowest loss on most evaluation skills across various tasks.

To demonstrate the effectiveness of SKILL-IT, the researchers applied it to the RedPajama dataset, which contains over 1.2 trillion tokens. They continually pre-trained a 3-billion-parameter model on the data selected by SKILL-IT. The results showed that SKILL-IT outperformed uniform sampling in terms of accuracy.

This new technique has the potential to revolutionize the training of large language models. It offers a way to optimize the learning process and improve performance. By understanding the order in which skills are learned, we can train AI models more efficiently. SKILL-IT brings us one step closer to creating AI systems that can learn and adapt more like humans.

Check out the research paper for more details.

About the Author:
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. He loves working on projects that harness the power of machine learning, especially in image processing. Aneesh is passionate about collaborating with others and building innovative solutions.
