Multiverse Computing, a Spanish firm working at the intersection of artificial intelligence and quantum-inspired computation, has raised approximately $215 million (EUR 189 million) in a Series B funding round. The investment will support the development and deployment of the company’s model compression platform, aimed at improving the speed and efficiency of large language models (LLMs).
The firm’s core technology, known as CompactifAI, is based on tensor networks, a method drawn from quantum physics. The approach can shrink AI models by up to 95%—that is, to as little as 5% of their original size—while largely preserving their performance. The result is significantly faster inference and substantial cost savings in running these models.
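CompactifAI’s internals are not public, but tensor-network methods can be viewed as a generalization of low-rank matrix factorization. As a loose illustration only (not the company’s actual method), the toy sketch below compresses a synthetic low-rank “weight matrix” with a truncated SVD, showing how a layer can be stored as two thin factors at a fraction of the parameter count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "weight matrix" with low-rank structure, standing in for
# one layer of a large model. (Purely illustrative data.)
r = 16
W = rng.standard_normal((512, r)) @ rng.standard_normal((r, 512))

# Truncated SVD: keep only the top-k singular values/vectors.
k = 16
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# The compressed form stores two thin factors (plus singular values)
# instead of the full 512x512 matrix.
original_params = W.size
compressed_params = U_k.size + s_k.size + Vt_k.size
ratio = compressed_params / original_params

# Reconstruct and measure how much accuracy the compression cost.
W_approx = (U_k * s_k) @ Vt_k
error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)

print(f"stored parameters: {ratio:.1%} of original")
print(f"relative reconstruction error: {error:.2e}")
```

Here the matrix is exactly rank 16, so keeping 16 components stores only about 6% of the original parameters with negligible reconstruction error; real model weights are only approximately low-rank, so practical compression trades size against a small accuracy loss.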
Multiverse is offering lightweight versions of several open-source LLMs, such as variants of Llama and Mistral. These compressed models are designed for deployment via cloud platforms like AWS, or for use in on-premise environments. According to internal benchmarks, they deliver up to a 12-fold performance improvement and can reduce operational costs by up to 80%.
One notable aspect of the technology is its ability to make advanced AI models run on smaller, less powerful hardware, including personal computers, smartphones, embedded systems in vehicles and drones, and even single-board computers such as the Raspberry Pi.
The company was co-founded by Román Orús, a theoretical physicist known for his work in quantum simulations using tensor networks, and Enrique Lizaso Olmos, who has a background in both academia and banking.
The funding round was led by Bullhound Capital, with participation from HP Tech Ventures, Forgepoint Capital, Toshiba, and Santander Climate VC, among others. With this latest round, the company has raised around $250 million in total.
Multiverse Computing reports a global client base of over 100 organizations, spanning sectors such as energy, finance, and manufacturing.