The emergence of AI-native companies

Ludovic Desgranges
January 7, 2025

Employees don't have as much access to AI as they'd like

Two and a half years after ChatGPT came into our lives, the emergence of LLMs in our everyday software and tools still seems limited. This is demonstrated by the articles and content that continue to appear on "how to integrate AI into your business".

Yet there's no lack of desire. A Boston Consulting Group study of 13,000 employees worldwide reported in 2024 that 42% of employees were confident about the impact of AI on their work. Another study, by Slack, found that among those who had used AI at work, 71% said these technologies improved their productivity.

But we have to admit that adoption in the office is not progressing as quickly as expected. This is due to the difficulty many of our everyday B2B software publishers have in integrating artificial intelligence into their already complex technology stack.

Publishers struggle to integrate AI into their tools

Integrating an LLM means rethinking all the software's functionalities and user interface. Which existing functionalities can be better realized by AI? What new functionalities can it bring? How can we integrate the ability of LLMs to deliver a more personalized and adaptive user experience? These are just some of the questions to ask before getting started, and the iteration choices to be made once the vision is clear.

It's always hard to dismantle something that works, even when stepping back is the best way to leap forward. And today, this context is turning SaaS markets upside down with the arrival of new, ultra-agile players who don't have to make these trade-offs: AI-native companies.

AI-native companies reshuffle the SaaS deck

Less intent on creating new markets (as FinTech did) than on outperforming incumbents in well-established ones, AI-native companies have undeniable advantages:

First, a change in technological vision: their entire technology stack is built around LLMs. Each key stage of information processing is handled by proprietary models capable of turning data into semantic vector representations (embeddings). This enables a deeper understanding of the data, and therefore better cross-referencing between datasets.
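The cross-referencing idea above can be sketched in a few lines: once documents are represented as vectors, semantically related items end up close together. This is a toy illustration, not any company's actual stack; the hand-picked vectors below stand in for what a real embedding model would produce.

```python
import math

# Hypothetical embeddings: in practice these would come from an embedding model.
docs = {
    "invoice_2024": [0.9, 0.1, 0.2],
    "contract_acme": [0.85, 0.15, 0.25],
    "holiday_party": [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_related(name):
    """Return the document whose embedding is closest to `name`'s."""
    others = {k: v for k, v in docs.items() if k != name}
    return max(others, key=lambda k: cosine(docs[name], others[k]))

print(most_related("invoice_2024"))  # documents on similar topics cluster together
```

The same similarity search underlies "better cross-referencing": the tool can surface the contract related to an invoice without anyone tagging them by hand.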

Secondly, advanced task automation: LLMs are particularly effective at understanding and creating information flows, which reduces the number of manual tasks the user has to carry out. The entire information workflow is automated by AI, and the human is only involved at the two ends of it: at the beginning, to define the tool's working axes, and at the end, to analyze the results the tool returns.
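The "human at both ends" pattern can be sketched as a simple pipeline: the user chooses the working axes up front, an automated middle stage (stubbed here with plain filtering and aggregation in place of an LLM) does the processing, and the user analyzes the final output. All names here are illustrative, not a real API.

```python
def run_workflow(axes, records):
    # Middle of the workflow: fully automated selection and aggregation.
    # In an AI-native tool, an LLM would handle this stage.
    selected = [r for r in records if r["topic"] in axes]
    return {topic: sum(r["value"] for r in selected if r["topic"] == topic)
            for topic in axes}

# Human input at the start: define the working axes.
axes = ["sales", "churn"]
records = [
    {"topic": "sales", "value": 120},
    {"topic": "churn", "value": 3},
    {"topic": "hiring", "value": 7},
    {"topic": "sales", "value": 80},
]

# Human at the end: analyze the returned results.
print(run_workflow(axes, records))  # {'sales': 200, 'churn': 3}
```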

Third, a learning loop: AI-native companies develop their own RAG systems (retrieval-augmented generation), which enable the tool to improve as it interacts with users and to refine its recommendations.
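At its core, a RAG system retrieves the stored passages most relevant to a user's question and feeds them to the LLM as context. The minimal sketch below uses a toy keyword-overlap score in place of vector search, and stops at prompt construction rather than calling a real model; it illustrates the shape of the pipeline, not a production design.

```python
# Example knowledge base a RAG system might index.
passages = [
    "Q3 revenue grew 12% year over year.",
    "The office cafeteria reopens next Monday.",
    "Churn decreased after the onboarding redesign.",
]

def score(query, passage):
    """Toy relevance score: count shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, k=1):
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query):
    """Assemble the context-plus-question prompt sent to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How did revenue grow?"))
```

A real system would replace `score` with embedding similarity and send the prompt to an LLM; the refinement the article describes comes from updating the indexed passages as users interact with the tool.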

Finally, a change in usage: the user can ask the tool specific questions to uncover insights, instead of digging into the data and manually extrapolating information of interest. No longer just reading and clicking. We talk. We interact.

Being AI-native saves time and ensures continuous improvement

Developing an AI-native solution means less time spent searching for information, and more time spent analyzing and making decisions. Errors are also limited: no more line breaks on a data table, no more selecting the wrong data filters, no more errors in task settings.

Above all, it lets us take full advantage of the continuous improvements shipped by the major LLM vendors: their improvements become ours as soon as they are released, instead of forcing us to reconsider our choice of LLM once the small building block of tasks it handles has grown more complex and no longer serves its original purpose.

It's a guarantee of quality and innovation, both for your company and for the solution's users.