
PockEngine: Speeding Up On-Device Training for AI Chatbots and Smart Keyboards


MIT Researchers Develop PockEngine, an On-Device Training Method for AI Models

MIT researchers have developed a new technique called PockEngine that enables deep-learning models to efficiently adapt to new sensor data directly on an edge device. With PockEngine, smartphones and other edge devices can perform on-device training without uploading user data to cloud servers, a practice that can pose security risks.

What is PockEngine and how does it work?

PockEngine determines which parts of a huge machine-learning model need to be updated to improve accuracy, and stores and computes with only those specific pieces. This speeds up on-device training by up to 15 times on some hardware platforms, with no loss of accuracy. The method also enables a popular AI chatbot to answer complex questions more accurately and slashes the amount of memory required for fine-tuning.
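To make the idea concrete, here is a minimal sketch, in PyTorch, of the kind of selective fine-tuning PockEngine automates: most of the model is frozen, so gradients and optimizer state are kept only for the pieces chosen for updating. The toy model, the chosen layer indices, and the learning rate are illustrative assumptions, not PockEngine's actual selection.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; PockEngine targets much larger networks.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Hypothetical output of PockEngine's analysis: only these layer
# indices are deemed worth updating for the new task.
layers_to_update = {2, 4}

for idx, module in enumerate(model):
    trainable = idx in layers_to_update
    for param in module.parameters():
        # Frozen parameters need no gradients and no optimizer state,
        # which is where the memory and compute savings come from.
        param.requires_grad = trainable

# The optimizer tracks only the selected parameters.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
```

Because frozen parameters never receive gradients, the backward pass can skip the work of computing and storing updates for them, which is the source of the savings described above.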

How does PockEngine speed up the training process?

PockEngine fine-tunes each layer, one at a time, on a given task and measures the accuracy improvement after each layer. In this way, it identifies the contribution of each layer and automatically determines the percentage of each layer that needs to be fine-tuned. The technique also generates a pared-down graph of the model to be used at runtime, which further improves efficiency.
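A rough sketch of that layer-by-layer contribution analysis might look like the function below, assuming a model built as an nn.Sequential plus task-specific train_step and evaluate callables; these names, the step count, and the learning rate are placeholders introduced here for illustration, not PockEngine's implementation.

```python
import copy

import torch
import torch.nn as nn


def layer_contributions(model, train_step, evaluate, num_steps=50, lr=1e-3):
    """Fine-tune one layer at a time and record the accuracy gain.

    `model` is assumed to be an nn.Sequential; `train_step(model, optimizer)`
    runs one training step and `evaluate(model)` returns an accuracy score.
    Both callables are task-specific placeholders.
    """
    baseline = evaluate(model)
    contributions = {}
    for idx in range(len(model)):
        trial = copy.deepcopy(model)
        # Freeze everything except the layer under test.
        for j, module in enumerate(trial):
            for p in module.parameters():
                p.requires_grad = (j == idx)
        params = [p for p in trial.parameters() if p.requires_grad]
        if not params:
            continue  # skip parameter-free layers such as ReLU
        optimizer = torch.optim.SGD(params, lr=lr)
        for _ in range(num_steps):
            train_step(trial, optimizer)
        # Accuracy gain from fine-tuning only this layer.
        contributions[idx] = evaluate(trial) - baseline
    return contributions
```

For simplicity this sketch works at whole-layer granularity; as described above, the actual technique goes further and decides what fraction of each layer to fine-tune.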

What are the applications of PockEngine?

Applied to deep-learning models on a range of edge devices, including Apple M1 chips and the digital signal processors found in many smartphones, PockEngine performed on-device training up to 15 times faster with no loss of accuracy, while significantly reducing the memory required for fine-tuning. When applied to large language models, it also improved model performance on tasks involving text and image processing.

Overall, PockEngine addresses the efficiency challenges posed by the adoption of large AI models across diverse applications and industries. It not only promises better edge applications, but also lowers the cost of maintaining and updating large AI models in the cloud. This work was supported by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT-Amazon Science Hub, the National Science Foundation (NSF), and the Qualcomm Innovation Fellowship.

