French Startup Develops European AI Architecture Trained on Public Resources

by Marco van der Hoeven

Dragon LLM, a French artificial intelligence company formerly known as Lingua Custodia, has announced the development of a new AI architecture, marking what it describes as the first European-designed and -trained alternative to existing U.S. and Chinese models. The company’s foundation model, named Dragon, was developed without private fundraising and trained exclusively on the European public supercomputers Leonardo and JUPITER.

The project was enabled by Dragon LLM’s selection as a winner of the European Large AI Grand Challenge in 2024, which granted the company access to high-performance computing resources managed by EuroHPC. According to the company, Dragon represents a hybrid AI architecture that diverges from the widely adopted Transformer architecture used by large language models such as GPT, Claude, and Llama.

The Dragon model was developed to offer performance comparable to Transformer-based models while using significantly less computational power. Dragon LLM claims the model operates with one-third the compute resources of comparable systems and can be deployed on standard servers without requiring large-scale GPU infrastructure. The model is designed for energy efficiency and accessibility, aiming to lower the cost and technical barriers to AI deployment for small and medium-sized enterprises.

The initial Dragon model is built with 3.6 billion parameters. The architecture and a demonstration model have been made publicly available via the Hugging Face platform, with larger-scale versions expected to follow.
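For readers who want to experiment, a demonstration model published on Hugging Face can usually be loaded with the standard transformers library. The sketch below is illustrative only: the repository identifier is a placeholder rather than the actual name of Dragon LLM's release, and a custom, non-Transformer architecture typically needs trust_remote_code=True so that the model code shipped with the checkpoint is used.

```python
# Illustrative sketch only: the repository id below is a placeholder, not
# Dragon LLM's actual Hugging Face release. Custom (non-Transformer)
# architectures generally require trust_remote_code=True so that the model
# code distributed with the checkpoint is loaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dragon-llm/dragon-3.6b-demo"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "European AI models can run on standard servers because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```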

Dragon LLM positions its development as an effort to support European technological sovereignty by offering a locally designed, trained, and deployable AI system. The company, founded in 2011 and based in Paris, focuses on developing frugal AI models for business applications, particularly in sectors that require energy and resource efficiency.

The launch of Dragon comes at a time when concerns about the energy consumption of large-scale AI systems are growing across Europe, particularly as data center demands continue to rise. Dragon LLM reports that its model not only trains faster but also serves more users per unit of hardware than its U.S. and Chinese counterparts.

 
