NICE unveiled a Robo-Ethical Framework promoting responsibility and transparency in the design, creation, and deployment of AI-powered robots. These ethical guidelines set the standard for designing, building, and deploying robots, and form the basis for sound, ethical collaboration between robots and humans. Comprising five guiding principles, NICE’s Robo-Ethical Framework underlies every interaction with process robots, from planning to implementation, and drives ethically sound human-robot partnerships in the workplace.
The rapid acceleration of AI has driven the proliferation of robots in various roles across both home-based and office work environments. With this integration, robots are being granted ever more access to business and customer data, yet ethical standards guiding the development and application of robots and AI have been lacking. The robotics industry has discussed the topic at length, but steps to formalize guidelines at an industry level have yet to be taken.
By introducing the industry’s first set of standards to self-govern the creation of responsible AI-driven robotics, NICE commits to the transparent design, development, and implementation of process automations, a commitment already inherent to its RPA platform. Deeply rooted in its product capabilities, NICE’s ethical framework is shared with every customer along with their robotic license. While the ultimate determination of what benefits humanity is subjective and contextual, NICE aims to keep the importance of a positive impact in RPA top of mind for the industry. The five guiding principles underlying the robot-human relationship in the workplace are as follows:
- Robots must be designed for a positive impact: Robots must be built to contribute to the growth and well-being of the human workforce. With consideration for societal, economic, and environmental impacts, every project that involves robots should have at least one clearly defined positive rationale.
- Bias-free robotics: Personal attributes such as color, religion, sex, gender, age, and other protected statuses must be excluded when creating robots, so that their behavior is employee-agnostic. Training algorithms must be evaluated and tested periodically to ensure they remain bias-free.
- Robots must safeguard individuals: Careful consideration must be given to whether and how decisions are delegated to robots. The algorithms, processes, and decisions embedded within robots must be transparent, with the ability to explain conclusions with unambiguous rationale. Accordingly, humans must be able to audit a robot’s processes and decisions, and must have the ability to intervene and correct the system to prevent potential harm.
- Robots must be driven by trusted data sources: Robots must be designed to act based upon verified data from trusted sources. Data sources used for training algorithms should be maintained with the ability to reference the original source.
- Robots must be designed with holistic governance and control: Humans must have complete information about a system’s capabilities and limitations. Robotics platforms must be designed to protect against abuse of power and illegal access by limiting, proactively monitoring, and authenticating any access to the platform and every type of edit action in the system.
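The bias-free principle above can be illustrated with a minimal sketch: excluding protected attributes from the data a process robot conditions on. Everything here (the `PROTECTED_ATTRIBUTES` set, the `strip_protected` helper, the record fields) is a hypothetical illustration, not part of NICE’s platform.

```python
# Hypothetical sketch: remove protected attributes before a robot's
# decision logic ever sees the record, so behavior is employee-agnostic.
PROTECTED_ATTRIBUTES = {"color", "religion", "sex", "gender", "age"}

def strip_protected(record: dict) -> dict:
    """Return a copy of a record with protected attributes removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

record = {"employee_id": "E123", "age": 52, "role": "analyst", "gender": "F"}
cleaned = strip_protected(record)
print(cleaned)  # {'employee_id': 'E123', 'role': 'analyst'}
```

Stripping such fields at ingestion is only a first step; as the principle notes, the trained algorithms still need periodic testing, since bias can re-enter through correlated attributes.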
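The governance-and-control principle, that every edit action be monitored and attributable to an authenticated user, can likewise be sketched as an append-only audit log that a human auditor can query. `AuditLog` and its methods are assumptions made for illustration, not a real platform API.

```python
# Hypothetical sketch: record every edit action with who did it, what they
# did, and when, so humans can audit and intervene as the principle requires.
import datetime

class AuditLog:
    def __init__(self):
        self._entries = []  # append-only in this sketch; never mutated in place

    def record_action(self, user: str, action: str, target: str) -> None:
        """Append one entry per edit action, stamped with a UTC timestamp."""
        self._entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "target": target,
        })

    def entries_for(self, user: str) -> list:
        """Let an auditor review everything a given account changed."""
        return [e for e in self._entries if e["user"] == user]

log = AuditLog()
log.record_action("alice", "edit", "workflow-42")
log.record_action("bob", "delete", "workflow-7")
print(len(log.entries_for("alice")))  # 1
```

In a real deployment the log would live in tamper-evident storage behind the same authentication layer that gates platform access, so that limiting, monitoring, and auditing are enforced together.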