Factmata, a UK-based AI platform tackling online misinformation at scale, launches its new self-service tool to help future-proof the internet against the billion-dollar problem of harmful narratives and fake news. Factmata’s AI technology assesses the level of risk narratives pose to brands, identifies the influencers behind each narrative, and enables organisations to monitor how those narratives emerge, grow and change over time.
The newly developed tool uses artificial intelligence to automatically group similar online opinions into ‘clusters’. Users can track misinformation, disinformation and false narratives that threaten to cause reputational harm, all in one easy-to-use dashboard.
Antony Cousins, CEO at Factmata, comments: “Factmata’s narrative monitoring tool takes organisations a step closer to policing their corner of the internet. With so much misinformation and disinformation about issues such as climate change or Covid, we have lost our ability to communicate without fuelling dangerous false narratives. Poorly researched and inadequately verified content is growing, and so is society’s reliance on it as news.”
“I don’t think I’d be overdramatic in saying that democracy is at stake if we don’t get on top of this problem. The biggest implication of bias, misinformation and harmful content in online conversations is the breakdown of our ability to communicate and share information across beliefs and political divides. At Factmata, we believe in creating a culture of change in how news is monitored. Flagging harmful content is essential, not just for brand safety but for wider internet safety, and we must act now before it’s too late.”
PR and marketing professionals, brands and even journalists can save time by analysing thousands of Tweets, Facebook posts and news articles at scale to identify and track the narratives that threaten to cause reputational harm or misinform the public.
With user-generated content continuing to grow exponentially, social media becoming ever more prominent in everyday life, and the metaverse set only to increase our exposure to social content, organisations need to reduce the impact of harmful narratives before they reach mainstream audiences.