Stellantis: SPOT.i Surpasses 3 Million Searches Since September 2025
Stellantis announces that SPOT.i, its internally developed AI-powered search engine, has exceeded 3 million natural language searches since its launch in September 2025. This technology, integrated into the SPOTICAR platform, transforms purchase intentions into personalized responses, marking a key milestone in the digital transformation of the European used car network.
An In-House AI Serving the European Used Car Market
SPOT.i centralizes data from all vehicles available for sale on the SPOTICAR network and incorporates automotive expertise to optimize the used car search experience. The engine translates natural language into technical search criteria without relying on restrictive filters. Since its launch in September 2025, more than 3 million searches have been conducted across Europe, allowing customers to discover a wider selection of vehicles and generating more qualified leads for the sales network.

SPOTICAR, one of the main players in the European used vehicle market, is accelerating its digital transformation through a pragmatic strategy that gradually integrates artificial intelligence into customer interactions. The network now includes 3,000 sales points across Europe.
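The article does not disclose how SPOT.i works internally. Purely as an illustration of the general idea of turning a free-text query into structured search criteria, a toy rule-based sketch might look like the following (all keywords, field names, and patterns here are hypothetical, not SPOT.i's actual taxonomy):

```python
import re

def parse_query(text: str) -> dict:
    """Toy mapper from a natural-language used-car query to structured criteria.

    Illustrative only: a production engine like SPOT.i would likely use a
    large language model rather than keyword rules.
    """
    criteria = {}
    text_lower = text.lower()

    # Fuel type keywords (illustrative, not an exhaustive list)
    for fuel in ("electric", "hybrid", "diesel", "petrol"):
        if fuel in text_lower:
            criteria["fuel"] = fuel
            break

    # Budget phrases such as "under €20,000" or "below 15000"
    m = re.search(r"(?:under|below|max)\s*€?\s*([\d,]+)", text_lower)
    if m:
        criteria["max_price"] = int(m.group(1).replace(",", ""))

    # Body style keywords
    for body in ("suv", "hatchback", "estate", "sedan"):
        if body in text_lower:
            criteria["body"] = body
            break

    return criteria

print(parse_query("A family SUV, hybrid, under €20,000"))
# {'fuel': 'hybrid', 'max_price': 20000, 'body': 'suv'}
```

The resulting dictionary could then be applied as filters against a vehicle inventory, which is the kind of translation step the article describes, minus the restrictive filter interface the user would otherwise have to fill in.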
Expansion through Conversational AI and Technological Partnerships
In France, SPOTICAR has also experimented on WhatsApp with a conversational agent developed in partnership with Salesforce. This agent understands the customer's intent expressed in natural language, translates it into technical criteria, and offers a selection of suitable vehicles available on the SPOTICAR network, providing a seamless customer experience and more qualified leads for the sales points.

According to Alexandre Fils, SPOTICAR's global marketing director: "Our strategy is to evolve SPOT.i through targeted and progressive improvements. In the long term, we aim to surpass the traditional model of digital classifieds and fully embrace the new customer experiences made possible by the emergence of large language models."