
Plateaus, problem solving, and (hyper) personalisation: our AI trends for 2025

New year, new ideas. 2025 is upon us, and with it the chance to make a few speculative predictions about what the next 12 months might hold. From retail media to data, our subject matter experts have been hard at work trying to narrow down the biggest trends for the year ahead.

This time round, it’s the turn of Ross Williams—Lead Data Scientist—who’s laid down what he believes will be five of 2025’s most significant AI trends. Let’s dive in.

 

The end of scaling?

We end 2024 with dramatic hearsay leaking out of the world’s top AI labs. Whisper it quietly, but has the transformer scaling paradigm finally plateaued?

Scaling is the phenomenon by which throwing more data and more computing power into the mix has consistently produced better and better Large Language Models (LLMs). Simple as it might sound, this is by no means a trivial or guaranteed result—and the approach has driven Generative AI’s (GenAI) spectacular success over many years (and a pace of improvement that has been hard to keep up with!).

What would it mean if the scaling paradigm is coming to an end, though? What if bigger and better models were no longer always just around the corner? To achieve the ongoing improvements that we’ve grown used to, researchers would have to find new directions in which to take the field.

The trends we consider below mesh well with this backdrop and consider different paths to value—ones that aren’t just focused on “bigger and better”. And who knows, maybe scaling will continue to be the golden goose after all.

 

Transformer-based models meet industry data

The unprecedented success of LLMs comes down to the transformer architecture and the magic of scaling. As these pioneering models potentially plateau (see above), it could be time for smaller, niche models to take centre stage. An added twist here could see companies leveraging their proprietary data sources to create GenAI-based approaches that are laser-focused on their domain of expertise.

This could take many forms: deploying RAG pipelines, or fine-tuning a model on a company’s secret-sauce documentation and code, for instance. Or, perhaps, it could involve a step away from natural language entirely and see different types of data thrown at transformer models instead.
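To make the RAG idea concrete, here is a minimal sketch in Python. Everything here is illustrative: the character-frequency `embed` function is a deliberately crude stand-in for a real embedding model, and in a production pipeline the retrieved context would be passed to an actual LLM rather than just printed.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a normalised character-frequency vector.
    A real pipeline would call a learned embedding model here."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank a company's proprietary documents by cosine similarity to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: -float(q @ embed(d)))[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the model answers from company data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Returns are accepted within 30 days with a receipt.",
    "Loyalty points expire after 12 months of inactivity.",
    "Stores open at 9am on weekdays.",
]
print(build_prompt("When do loyalty points expire?", docs))
```

The value of the approach is that the generalist model never needs to be trained on your data at all; it only ever sees the small slice of context retrieved per query.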

There are a few reasons why this is a powerful direction for GenAI to move in. Smaller models designed for specific tasks not only unlock new capabilities for companies, but are also cheaper to train and deploy. The lower computational cost is good news for the bottom line and for the environment, too.

Efficient deployment and operation of these models is another growing concern for any company working in this area, as is data privacy and security. Self-hosting compact, specialised models—rather than relying on ever-larger third-party generalist monsters—sidesteps fears over what information can or can’t be fed to a foundation model.

At dunnhumby we are particularly excited by our work on basket transformers, where we train transformer-based models on product-basket data, enabling a new type of retail foundation model.
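Purely as an illustration of the idea (this is a hypothetical tokenisation sketch, not our actual basket-transformer design), product baskets can be encoded as token sequences in much the same way text is prepared for a language model:

```python
# Toy baskets: each basket is a list of products, analogous to a sentence.
baskets = [
    ["milk", "bread", "eggs"],
    ["milk", "cereal"],
    ["bread", "butter", "jam"],
]

# Build a product vocabulary, reserving 0 for padding.
vocab = {"<pad>": 0}
for basket in baskets:
    for product in basket:
        vocab.setdefault(product, len(vocab))

# Encode each basket as a fixed-length token sequence — exactly the shape
# of input a transformer expects for text.
max_len = max(len(b) for b in baskets)
encoded = [
    [vocab[p] for p in basket] + [0] * (max_len - len(basket))
    for basket in baskets
]
print(encoded)  # → [[1, 2, 3], [1, 4, 0], [2, 5, 6]]
```

Once baskets look like token sequences, the standard transformer training recipe (predicting held-out items, learning product embeddings) carries over largely unchanged.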

 

Multimodal models open up the possibilities of hyper personalisation

Today, data scientists typically build models that consume a single type of data. That could be forecasting models that purely use tabular data, for instance, or sentiment analysis of social media posts based on text data. In the future, models that natively combine disparate types of data and leverage them together—“multimodal models”—will be more prevalent.

One type of multimodal model that is already well-established is text-to-image generation, but this example is only the tip of the iceberg when it comes to multimodality. It’s hard not to believe that the trend here will be towards “everything-to-vec”, where any conceivable type of input will end up being fed into increasingly multimodal models. Text, image, audio, video, sensor, tabular data, and more will all be on the table.

These Frankenstein’s monster models will not just unlock new abilities, but also enable increasingly natural and context-aware interactions with AI. That said, I’d be lying if I claimed I haven’t rolled my eyes at some of the storyboard narratives conjured up by consultants, with users asking their phone to update and run predictive models!

What do these multimodal models get you in a retail setting? The first layer of benefits to be unlocked may be higher performance on our traditional data science problems. You don’t need to know how these complicated neural networks work under the hood to appreciate a key conceptual benefit: different types of data carry different information. Machine learning tasks often see a boost in performance when you combine several inputs that are uncorrelated with each other—that is, inputs that reflect different aspects of the same real-world problem.
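A toy numerical sketch of that intuition: two independent, equally noisy estimates of the same quantity, averaged together, beat either one alone (the error reduction of roughly 1/√2 follows from the variances adding).

```python
import random

random.seed(0)
truth = 10.0
n = 10_000

def rmse(errors):
    """Root-mean-square error of a list of residuals."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

err_a, err_b, err_combined = [], [], []
for _ in range(n):
    a = truth + random.gauss(0, 1)  # e.g. a signal from tabular data
    b = truth + random.gauss(0, 1)  # e.g. an independent signal from text
    err_a.append(a - truth)
    err_b.append(b - truth)
    err_combined.append((a + b) / 2 - truth)

# Averaging two independent unit-variance errors halves the variance,
# so the combined estimate's RMSE sits near 0.71 versus ~1.0 for each input.
print(rmse(err_a), rmse(err_b), rmse(err_combined))
```

The same logic is what multimodal models exploit: a text description and a purchase history are far less correlated with each other than two tabular features usually are, so fusing them has more headroom.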

Of course, multimodal models won’t just boost performance on current retail tasks; they’ll also open up problem solving of the sort that can’t be done today. Consider generating a media banner for a product based on a text description—that’s one use case already happening today. But why not take this a step further? A multimodal approach could allow you to create a product-centric media banner for a specific customer, all based on their purchase history or website touchpoints. An era of hyper personalisation beckons.

 

Reasoning models help to solve the problem of problem solving

A well-publicised weakness of the current generation of LLMs is their capacity to hallucinate and spew erroneous output. At their worst, these models can return convincing answers containing subtle errors—a productivity-destroying rather than -enhancing event for a user.

As such, the top AI labs have strived to create complementary approaches that have a more logical grounding, providing stronger problem-solving abilities. Interestingly this goal of a reasoning model has been tackled from some quite different directions.

In July 2024, Google DeepMind used a neuro-symbolic approach to create a reasoning model capable of achieving a silver medal at the International Mathematical Olympiad. The capabilities of this model are quite distinct from an LLM’s—it was able to write a 96-step proof to solve a tricky geometry problem. More recently, OpenAI introduced their o1 model family. Again, the goal here is a model with advanced reasoning and chain-of-thought ability.

Whatever the route to problem-solving models, they offer big upsides. If training-time scaling has plateaued, it makes sense to push performance along other dimensions. In the case of the o1 models, for example, performance can be scaled with “thinking time”. Another intriguing use case would deploy a reasoning model as an independent smart tutor to boost LLM training, allowing automated, logically astute marking of LLMs’ (occasionally feverish) responses.

 

In quantum, the march towards error-corrected devices continues

What can we expect in 2025 for this esoteric field that edges slowly towards business impact? 2024 has seen mixed fortunes for quantum startups. While the best have thrived, some companies that went public during the Covid-tech IPO frenzy are now nothing more than penny stocks.

Alongside this drama, though, there have been undeniably impressive achievements—particularly on the hardware front, as the field abandons any remaining belief that small noisy devices can create advantage. Rather, the quantum community has set its sights on larger, error-corrected devices. Technically (much) harder to achieve, yes, but with a guaranteed payoff of unlocking currently impossible computations.

Since this blog is about AI, I should also note that quantum computing’s impact on fields like GenAI may be limited for a long time (though the converse is not true; AI will likely help the development of quantum computing a lot).

One of the first areas where we will see quantum advantage in industry will likely be on optimisation problems, and you can check out our work with Durham University on this front here and here.

 
