
Interview: Can AI be trusted?

by Marco van der Hoeven

As artificial intelligence continues to permeate various sectors, one crucial question looms large: Can AI be trusted? Marc Mathieu, SVP AI Transformation at Salesforce, sheds light on the concept of trust in AI and the company’s commitment to embedding ethical considerations at the core of their AI systems.

Salesforce recently launched an extensive suite of products designed to integrate AI into customer relationship management workflows, covering areas such as sales, service, marketing, and commerce. Central to this is the new Einstein One platform, which combines data, AI, and CRM. Additionally, Einstein Copilot and Einstein Copilot Studio were announced, tools that bring a conversational AI interface to all CRM applications.

Last week Marc Mathieu, SVP AI Transformation at Salesforce, spoke at World Summit AI about the implications of AI. He focused on the concept of trust in AI. “I emphasized the importance of developing trustworthy AI systems that have ethical considerations at their core, and I highlighted Salesforce’s longstanding commitment to trust, which has been our top value since the company’s inception. We’ve always prioritized trust, whether it was helping businesses transition their data to the cloud or introducing predictive AI with the Einstein solution in 2016. Now, our commitment extends to generative AI.”


He also discussed the Einstein Trust Layer, a comprehensive suite of features ensuring secure data retrieval, dynamic routing, masking of personally identifiable information (PII), and zero retention of both prompts and outputs generated by the large language models (LLMs). This applies to Salesforce-hosted LLMs as well as partner models such as OpenAI's.
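To make the PII-masking idea concrete, here is a minimal illustrative sketch, not Salesforce's actual implementation: a prompt is scrubbed of detectable PII before being forwarded to an external model. The regex patterns and placeholder tokens below are assumptions for illustration; a production trust layer would rely on far more robust entity recognition.

```python
import re

# Illustrative patterns only -- real PII detection is considerably
# more sophisticated than a pair of regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the prompt
    leaves the trusted environment."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

# The masked prompt, not the original, is what an external LLM would see.
masked = mask_pii("Contact jane.doe@example.com or call +31 20 123 4567")
```

Zero retention would then be a contractual and infrastructure guarantee on top of this: neither the masked prompt nor the model's output is stored after the response is returned.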

“Toxicity detection and question handling are key elements we’ve incorporated. We’ve dedicated an entire product team to focus on these aspects, ensuring that we lay the foundation for trust with our customers. Moreover, my final point during the presentation emphasized viewing trust not just as a current feature or layer but as a roadmap. We’re evolving from predictive AI to generative, conversational, and eventually autonomous AI.”


In his interactions with CEOs and CXOs, three primary questions emerge. The first is ‘How do I start?’ Companies are keen to harness the productivity benefits of AI and want guidance on initiating their AI journey, ensuring their teams acquire the necessary skills and capabilities. The second question revolves around long-term strategy. They’re concerned about the future and how to craft a sustainable roadmap. The goal for many is to transition into what he terms an ‘AI-first’ approach.

The crux of their inquiries revolves around trust. How can businesses ensure that their actions today, which will undeniably influence tomorrow, don’t jeopardize their data or their customers’ data? This concern resonates deeply, transcending regulatory boundaries. As observed in one of the panels, many companies prioritize trust, particularly concerning their data and their customers’ data, as they recognize the inherent risks associated with breaches.


He continues: “Our approach to trust is fundamentally technical. I work closely with two primary teams to grasp our advancements in trust and relay that information to our customers. Firstly, there’s our ethics team. Established in 2018, we inaugurated the Office of Ethics and Human Use, appointing an Ethics and Human Use Officer. By 2019, they began publishing principles centered on cultivating trustworthy AI. Recently, they rolled out guidelines designed to aid businesses in safely and accurately implementing AI. These guidelines serve as a North Star, guiding companies toward ethical AI deployment.”

“Secondly, we have our product teams, consisting of a myriad of dedicated engineers. They concentrate on the functionalities I previously mentioned, such as secure data retrieval, PII masking, and zero retention. Each segment of our trust layer is fueled by a product team, ensuring these features are optimized for our clientele. Furthermore, our Ethical AI Council and Advisory Council provide invaluable feedback. Our commitment is unwavering – as challenges arise or as we identify potential risks, we persistently collaborate with both teams to uphold trust.”

Generative AI

AI, especially generative AI, has recently seen a rapid evolution. “The pace of advancements can be overwhelming. To address this, Salesforce took a decisive step this year to restructure its focus entirely on AI, embedding AI functionalities within our CRM platform. Historically, our CRM applications in sales, marketing, and other areas operated in silos. However, with AI’s integration, we’ve adopted a unified approach.”

“Now, we possess a foundational data and AI module applicable across all applications and workflows. This unified strategy is pivotal – it ensures a dedicated team is continually enhancing our trust and security layers. Moreover, we’ve embraced an open ecosystem, allowing customers to integrate their data from various sources and even incorporate their preferred large language models (LLMs) if they align better with specific functions.”


Salesforce is actively involved in numerous AI councils, specifically those centered on ethics. “While we don’t drive regulation, our objective is to offer insights, both from our perspective and from our customers’, to shape future regulatory considerations. Our ethics team is deeply invested not only in data privacy but also in human rights. Our platform’s design emphasizes data protection and ensures the AI we develop promotes equality, diversity, and inclusion.”

“Another pivotal aspect is fostering honesty and transparency. We prioritize declaring the origin of data or information. Regarding the trust layer, which encompasses vital elements like diversity and inclusion, our primary challenge is the swift pace of technological evolution. Trust, by its very nature, is absolute. You either trust, or you don’t. With over two decades prioritizing trust and 14 years being recognized among the world’s most ethical corporations, we aim to ingrain a ‘trust-first’ mindset in AI development. We don’t claim to have all the answers, but our endeavor is to champion this mindset, laying the groundwork for standards in both the private and public sectors.”

“The emerging AI roadmap points towards greater reliance on autonomous AI. Imagine personal AI assistants that manage our repetitive tasks. I often say, ‘Anything that can be automated, will be’. This automation of mundane, repetitive tasks will liberate human capital, allowing us to focus on more creative, relationship-driven endeavors. The ultimate aim of AI is augmentation – enhancing human capabilities and redirecting our energies towards tasks that genuinely require human ingenuity.”
