Tensoic has recently introduced Kannada Llama to address the limitations of existing large language models, including their proprietary nature, heavy computational requirements, and barriers to broader research community contributions. The team emphasizes the importance of open models in facilitating innovation in natural language processing (NLP) and machine translation. Kannada Llama aims to extend Llama-2's capabilities to low-resource Indian languages, especially Kannada, by modifying the model's vocabulary with a SentencePiece tokenizer. This efficient training approach makes training LLMs feasible on modest computational budgets.
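To see why vocabulary modification matters, consider how a tokenizer without dedicated Kannada vocabulary handles Kannada text: it typically falls back to UTF-8 byte tokens, inflating sequence length. The stdlib-only sketch below illustrates this cost with purely hypothetical token counts; it does not measure the real Llama-2 or Kannada Llama tokenizers.

```python
# Illustrative sketch: cost of byte-level fallback vs. an extended
# vocabulary. Numbers are assumptions for illustration, not
# measurements of any real tokenizer.

word = "ಕನ್ನಡ"  # "Kannada" written in Kannada script (5 codepoints)

# Worst case without Kannada vocabulary: one token per UTF-8 byte.
# Kannada codepoints each take 3 bytes in UTF-8.
byte_fallback_tokens = len(word.encode("utf-8"))

# With an extended vocabulary containing the whole word, it costs 1 token.
extended_vocab_tokens = 1

print(f"byte-fallback: {byte_fallback_tokens} tokens")   # 15
print(f"extended vocabulary: {extended_vocab_tokens} token")
```

Shorter token sequences mean more text fits in the context window and each training step covers more content, which is part of what makes low-budget pretraining viable.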
### Enhancing the Efficiency of Llama-2's Vocabulary for Kannada
The proposed method enhances the efficiency of Llama-2's vocabulary for processing Kannada text. A SentencePiece tokenizer is trained on a Kannada text corpus and merged with the existing Llama-2 tokenizer. Pretraining is then performed on roughly 600 million Kannada tokens from the CulturaX dataset, using Nvidia A100 80GB instances. The researchers apply Low-Rank Adaptation (LoRA) during pretraining, which freezes the pretrained model weights and sharply reduces the number of trainable parameters.
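The parameter savings from LoRA can be sketched with a minimal NumPy example: the pretrained weight matrix W stays frozen, and only two small matrices A and B are trained, so the effective weight becomes W + B @ A. The dimensions below are assumed for illustration and are not taken from the Kannada Llama training setup.

```python
import numpy as np

# Minimal LoRA sketch: frozen weight W plus a trainable low-rank update.
d, r = 4096, 8  # model width and LoRA rank (illustrative assumptions)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, init 0

x = rng.standard_normal(d)
# Forward pass: frozen path plus the low-rank adapter path.
y = W @ x + B @ (A @ x)

full_params = d * d            # parameters updated in full fine-tuning
lora_params = A.size + B.size  # parameters updated with LoRA
print(f"trainable: {lora_params:,} (LoRA) vs {full_params:,} (full)")

# With B initialized to zero, the adapted model starts out identical
# to the frozen pretrained model.
assert np.allclose(y, W @ x)
```

Here only about 65 thousand parameters are trained per layer instead of roughly 16.8 million, which is why LoRA pretraining fits on a small number of A100 instances.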
The new model addresses key challenges associated with LLMs, emphasizing the importance of open-source models in fostering innovation, especially for low-resource Indian languages. This comprehensive approach makes Kannada Llama a substantial step toward overcoming the limitations of existing models, and its commitment to openness and collaboration with other organizations reflects broader community objectives in developing state-of-the-art language models.
Pragati Jhunjhunwala. Find the original article [here](https://www.tensoic.com/blog/kannada-llama).