AI companies are finally looking at Small Language Models, and expect to make big bucks

FP Staff May 20, 2024, 16:10:53 IST

Most tech companies have spent a fortune training their LLMs, or Large Language Models. While LLMs incur exorbitant costs and are difficult to monetise, SLMs, or small language models, are cheaper to build, scale well, and can be monetised more effectively

While tech giants have mainly focused on LLMs till now, they are looking at SLMs, or Small Language Models, with renewed interest, and expect to monetise them better than LLMs. Image: AI generated.
Artificial intelligence (AI) companies have been investing heavily in large language models to power generative AI products, but there’s a new trend on the horizon: small language models.

Companies like Apple, Microsoft, Meta (formerly Facebook), and Google are now focusing on developing smaller AI models with fewer parameters, yet still packing a punch in terms of capabilities.

The shift towards smaller models is driven by several factors. One key motivation is to address the concerns surrounding the adoption of large language models.

While large models excel in performance and complexity, they come with significant costs and computational requirements. This has made them less accessible to businesses, especially those with budget constraints or concerns about data privacy and copyright liability.

To bridge this gap, tech giants are introducing smaller language models as more affordable, energy-efficient, and customizable alternatives. These models require less computational power to train and operate, making them a viable option for a wider range of applications.

Additionally, they offer the advantage of processing tasks locally on devices, which is appealing to organizations keen on maintaining control over sensitive data.

The smaller models are gaining traction across various sectors. Legal professionals, such as Charlotte Marshall from Addleshaw Goddard, recognize their potential to help businesses navigate regulatory requirements and cost concerns associated with larger models.

Moreover, the ability of small models to run on mobile devices opens up new possibilities for on-the-go AI applications.

Major players like Meta and Google are leading the charge in developing small language models with impressive capabilities.

For instance, Meta’s Llama 3 family includes an 8-billion-parameter model that the company says holds its own against much larger models such as GPT-4. Similarly, Microsoft claims its Phi-3-small model, with 7 billion parameters, outperforms earlier versions of OpenAI’s GPT models.

It’s not just the tech giants that are embracing smaller models. Start-ups like Mistral are also making strides in this space, offering advanced capabilities tailored to specific applications. Furthermore, device manufacturers like Google and Samsung are embedding small AI models directly into their products, further expanding their accessibility.

While the trend towards smaller models is gaining momentum, OpenAI remains committed to developing larger models with enhanced capabilities. CEO Sam Altman acknowledges the demand for top-performing models but stresses the importance of offering options tailored to different needs and preferences.

The rise of small language models marks a significant shift in the AI landscape. These models offer a more accessible, cost-effective, and privacy-friendly alternative to their larger counterparts, driving their adoption across various industries.

As businesses continue to seek AI solutions, the versatility and accessibility of small models are expected to play a pivotal role in shaping the future of AI technology.

(With inputs from agencies)
