An international study led by researchers at Tampere University indicates that public trust in artificial intelligence systems and the companies developing them is shaped more by users’ sense of relatedness and positive engagement with the technology than by perceptions of its technical competence.
As AI technologies become more integrated into daily life and public institutions, the research examined how trust is formed through a socio-psychological lens. Rather than focusing exclusively on technical performance or user skill levels, the study assessed how basic psychological needs—relatedness, autonomy and competence—alongside attitudes toward AI and self-efficacy in its use, affect levels of trust.
The findings show that positive attitudes toward AI and a sense of connectedness when interacting with technology consistently predicted trust across countries. By contrast, factors such as technological autonomy, technological competence and AI self-efficacy were associated with trust only in certain national contexts.
“As AI systems increasingly mediate how people work, communicate and access information, trust is no longer just about whether a technology functions correctly,” said doctoral researcher Anica Cvetković of Tampere University. “Our findings show that trust is strongly linked to whether people feel socially and psychologically supported when using AI, and this pattern holds across different cultural and technological contexts.”
The study is based on survey data collected in 2024 from 11,259 participants in 12 countries across six continents. The results were published in the journal Behaviour & Information Technology. According to the researchers, the dataset represents one of the broadest cross-national examinations of how trust in AI systems and AI-driven companies develops.
In addition to assessing trust in AI technologies generally, the study examined perceptions of major technology companies, including social media platforms that rely extensively on AI systems. The researchers found that trust in AI and trust in corporate actors are closely connected.
By including participants from regions with varying technological infrastructure and cultural norms, the study also explored differences in trust formation across global contexts. The findings suggest that everyday experiences with technology—and whether those experiences are perceived as inclusive and empowering—play a significant role in shaping trust.
“Trust in artificial intelligence, and particularly in the companies developing these systems, is becoming increasingly important,” said Professor Atte Oksanen, one of the lead researchers. “AI now influences how we work, communicate and access essential services. Recent changes in global politics have also underlined the need for Europe to develop strong and reliable alternatives of its own. Ensuring trustworthy and transparent development is therefore not only a technological priority, but also a strategic one for our societies.”
The researchers state that the results highlight the importance of human-centered and culturally sensitive approaches to AI design and governance. They argue that improving technical performance or user skills alone is unlikely to foster trust if individuals feel disconnected or excluded in their interactions with AI systems.
“If AI is to be accepted as part of everyday life and public institutions, trust must be built on more than efficiency,” Oksanen said. “Understanding how people relate to AI – and to the companies that develop it – is essential for the legitimacy of AI-driven societies.”
