Distilled LLMs open doors for Indic advancements
By CS Mathew
DeepSeek’s reported creation of its large language model using knowledge distilled from OpenAI’s o1 model has captured the attention of the Indian tech community for a particular reason: it exemplifies a cost-effective and efficient pathway to powerful AI. “DeepSeek, by leveraging knowledge distillation, proves that smart innovations don’t always need massive computational power,” says Ayush Gupta, co-founder and CTO of Enlog.
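To make the idea of knowledge distillation concrete, here is a minimal sketch in PyTorch of the standard student-teacher training objective. It is an illustration of the general technique only; the model sizes, temperature, and mixing weight are assumptions for the example and are not details of DeepSeek's actual pipeline.

```python
# A minimal sketch of knowledge distillation (Hinton-style), assuming PyTorch.
# All hyperparameters here are illustrative assumptions, not DeepSeek's settings.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (imitating the teacher) with the usual
    hard-label cross-entropy term."""
    # Soften both output distributions with the temperature, then measure
    # how far the student is from the teacher via KL divergence.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kl = kl * (temperature ** 2)  # standard temperature scaling

    # Ordinary cross-entropy against ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    # Weighted combination: alpha controls how much the student imitates
    # the teacher versus fitting the labels directly.
    return alpha * kl + (1.0 - alpha) * ce

# Toy usage: a batch of 4 examples over a 10-token vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The appeal for resource-constrained settings is visible in the structure of the loss: a small student model learns from the teacher's full output distribution rather than from hard labels alone, so much of the teacher's capability can be transferred without matching its training compute.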