How do you tell who’s human online? Today, it’s not always clear to users whether they’re interacting with a human agent or an AI. Chatbots are becoming less robotic – they’re smarter, more conversational, and more humanlike.

This raises an important question: can a lack of transparency in Conversational AI erode users’ trust? It’s something UX Designer Richard de Vries contemplates every day in his role at Philips, where he designs user journeys for chatbots and for sleep and respiratory products.

“Transparency and explainability are everything. Firstly, users need to understand what happens to their data. And secondly, they need to know that they’re communicating with AI – not a real person,” says Richard.

A foundational element of Transparent and Explainable AI is the basic awareness that a person is interacting with AI. As Richard points out, sometimes it’s a fine line to tread.

Spotlight - Richard de Vries

Learning by Design 

“When a system becomes too human, affordance with the user can sometimes go out the window,” he adds.

“When an end-user sees a webpage, a form, or a video, they know what they’re dealing with. But when the user starts to use a more natural interface, the line becomes blurred – ‘am I interacting with a human, machine, or algorithm?’ It can jolt the user’s experience.”

Richard’s journey with Transparent and Explainable AI began 15 years ago when he graduated with a BA in Interaction Design from Willem de Kooning Art Academy, part of Rotterdam University of Applied Sciences. His bachelor’s thesis on natural user interfaces piqued his interest in Conversational AI.

Conversational AI refers to chatbots or virtual agents that facilitate real-time interactions with users. The technology uses large amounts of data, machine learning, and natural language processing to imitate human interactions, recognise speech and text inputs, and translate across languages.
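To make that pipeline concrete, here is a minimal, hypothetical sketch in Python – not Philips’ system, and far simpler than a trained NLP model – showing the core loop: map a user’s free-text input to an intent, then to a response. All names here (INTENTS, classify, respond) are invented for this example.

```python
# Minimal, hypothetical sketch of a Conversational AI loop:
# map free-text user input to an intent, then to a response.
# Real systems replace the keyword matcher with trained NLP models.

INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "device_help": {"mask", "ventilator", "pressure", "apnoea"},
    "data_privacy": {"data", "privacy", "stored"},
}

RESPONSES = {
    "greeting": "Hi! I'm an automated assistant (not a person). How can I help?",
    "device_help": "I can walk you through device setup. Which model do you have?",
    "data_privacy": "Anything you type here stays in this chat session.",
    "fallback": "I'm not sure I understood. Could you rephrase that?",
}

def classify(text: str) -> str:
    """Return the intent whose keywords best match the input text."""
    words = set(text.lower().split())
    best, best_overlap = "fallback", 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

def respond(text: str) -> str:
    return RESPONSES[classify(text)]

print(respond("hello there"))              # -> greeting response
print(respond("where is my data stored"))  # -> data_privacy response
```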

Richard tapped into this knowledge at Philips, where he would work closely with algorithm developers to build chatbot user journeys. 

“When I joined the company, I started by answering basic design questions like ‘what kind of button do we use or how should this interface look?’. We quickly realised that the challenge here was more about building better conversations than where to put buttons,” he says.

Building Trust With Users

Richard and his team worked on understanding user input and generating responses in a natural way. Conversational AI combines natural language processing (NLP) with machine learning: the NLP processes feed into a feedback loop with the machine learning processes, with the aim of continuously improving the AI algorithms.
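One way to picture that feedback loop – as a sketch under assumed conventions, not a description of any real pipeline – is to log each exchange along with the user’s rating and route poorly rated exchanges into a retraining queue, so the underlying model improves over time. The Exchange and FeedbackLoop types below are hypothetical.

```python
# Hypothetical sketch of the NLP-to-ML feedback loop:
# exchanges with negative user feedback are queued so the
# underlying model can later be retrained on corrected examples.

from dataclasses import dataclass, field

@dataclass
class Exchange:
    user_text: str
    predicted_intent: str
    helpful: bool  # user's thumbs-up / thumbs-down on the reply

@dataclass
class FeedbackLoop:
    retraining_queue: list[Exchange] = field(default_factory=list)

    def record(self, exchange: Exchange) -> None:
        # Negative feedback becomes future training data
        # (after human review and relabelling in practice).
        if not exchange.helpful:
            self.retraining_queue.append(exchange)

loop = FeedbackLoop()
loop.record(Exchange("my mask leaks at night", "data_privacy", helpful=False))
loop.record(Exchange("hello", "greeting", helpful=True))
print(len(loop.retraining_queue))  # 1 exchange flagged for retraining
```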

Richard’s role grew from conversational designer to product owner. Right now, he works on Philips’ sleep and respiratory products, including ventilators, sleep apnoea devices, and therapy systems. Customers often have specific medical questions about these products, and Conversational AI helps provide clear answers.

“At first, we were dealing with many static interfaces, but the problems we were hearing were very human ones. We needed a human interface, so we developed assistants to address those problems systematically,” says Richard. 

Ensuring transparency would prove to be key to delivering Trustworthy AI experiences.

“Because we were using algorithms to help users, we needed to be as transparent as possible in communicating what our interface was capable of and how their data was used,” he adds. 

“Anything said to the chatbot would stay in the chatbot. We were explicit in communicating that throughout the interaction – not just in a long disclaimer at the end.”
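A sketch of what ‘disclosure throughout the interaction’ could look like in code, under assumed conventions (the function and term list are invented for illustration): the assistant identifies itself as AI on the first turn and repeats the data-handling promise whenever the conversation touches sensitive topics, rather than relying on a single end-of-session disclaimer.

```python
# Hypothetical sketch of disclosure woven through the interaction:
# the assistant states it is AI up front, and restates the
# data-handling promise whenever sensitive terms come up.

SENSITIVE_TERMS = {"pressure", "diagnosis", "symptoms", "prescription"}

def with_transparency(user_text: str, bot_reply: str, first_turn: bool) -> str:
    parts = []
    if first_turn:
        parts.append("Note: you're chatting with an automated assistant, not a person.")
    if SENSITIVE_TERMS & set(user_text.lower().split()):
        parts.append("Reminder: anything you share here stays in this chat.")
    parts.append(bot_reply)
    return "\n".join(parts)

print(with_transparency("what pressure should I use",
                        "Please check with your clinician for settings.",
                        first_turn=True))
```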

Shaping Future Conversations

Richard and his team work through a wide range of customer cases to ensure users get the information and solutions they need for a seamless, transparent journey. But Richard believes there’s always room for improvement.

“If users fail to get the right answers from AI, that’s frustrating for them, so we’ve always got to keep pushing and seeing what’s possible,” he says. 

“When I look around, I see clearer language from bots – that’s positive and something we’ve tested. I expect that as AI becomes more effective and widespread, users will learn to be more trusting.”

When approaching Explainable AI, Richard is a firm believer in working closely with policymakers to protect people and drive product innovation.

“Policy provides the foundation to innovate and build better solutions. Because with stringent rules and regulations, which we’re beginning to see, you get a clear starting point to build secure, trustworthy products that give users peace of mind.”

To learn more about building Trustworthy AI experiences, catch Richard’s workshop and lightning talk at the TTC Summit. Register your place: https://summit.ttclabs.net