Take a human approach
When developing and deploying AI systems, it’s crucial to prioritize the needs, values, and well-being of the humans who will use them. The key to this approach is designing experiences that are transparent, ethical, and accountable. Ultimately, the goal is to create technology that serves humanity and aligns with company goals and values. For a primer on using or working with ServiceNow’s AI technology, check out Responsible AI Guidelines 2024: A Practical Handbook for Human-Centered AI.
Make informed AI decisions
Asking the following questions will help ensure that the addition of AI to an experience is well-considered, user-centered, transparent, and aligned with the needs and expectations of the target personas.
Is AI really needed to solve the problem?
This question helps ensure that AI is not implemented for the sake of using the technology but rather because it genuinely addresses the problem at hand more effectively than other methods. It encourages critical thinking and prevents unnecessary or ineffective use of AI.
What value will AI provide to users?
Knowing how users perceive and evaluate the value of the AI solution helps in aligning the design and functionality to meet their expectations. It also ensures that the AI solution provides meaningful benefits and addresses their specific pain points.
Do people understand they are interacting with AI?
Transparency is crucial in AI experiences. Users should be aware that they are interacting with AI and understand the limitations and capabilities of the technology. It helps manage user expectations and builds trust in the AI system.
Is the experience transparent, traceable, and contestable (TTC)?
TTC ensures that users have visibility into how AI decisions are made, can trace the outcomes back to AI processes, and have the ability to contest or provide feedback on AI-generated outputs. It promotes accountability, trust, and user empowerment.
Does the experience empower users throughout their journey?
An AI experience should empower users by providing them with the necessary information, control, and decision-making capabilities. It ensures that users feel in charge and can make informed choices based on their needs and preferences.
Have security and risk mitigation been designed into the solution from the start?
Considering security and risk mitigation from the beginning helps proactively address potential vulnerabilities and ensures the AI solution operates in a secure and responsible manner. It also protects user data and privacy and helps prevent harmful outcomes.
Have potential risks been identified and addressed?
Identifying and addressing the risks associated with AI helps mitigate potential negative impacts on users. It ensures that the AI solution is designed to prioritize user well-being, avoid bias, and prevent adverse effects on individuals or society.
Has the experience been tested with a diverse audience?
Including diverse groups in user research and testing helps uncover biases, identify varying needs, and ensure that the AI solution is inclusive and accessible to a wide range of users. It promotes fairness, avoids discrimination, and improves the overall user experience.
Does it build user confidence in AI?
Building user confidence in AI is crucial for adoption and continued usage. Designing an experience that gradually introduces and educates users about AI capabilities, while providing opportunities for feedback and learning, helps foster trust and acceptance.
Is there a plan to iterate on the solution and adapt to changing user expectations and needs over time?
Recognizing that the relationship between humans and AI will evolve over time is essential. Having a plan for continuous improvement, user testing, and adapting the AI solution ensures that it remains relevant, effective, and aligned with evolving user expectations and needs.