
Spotlight Interview: Wan Sie Lee, Director for Trusted AI, IMDA

Peter Tanham

Data Policy Manager, Meta & TTC Labs

AI Governance: Building Trust in AI

When it comes to AI Governance, the learning curve can be steep. Not only is the concept relatively new, but it also spans several distinct practices. Good governance is critical to building accountability into AI, helping organisations manage risk, maintain regulatory compliance and ensure ethical use as they develop new technologies.

To help us understand this world of AI Governance, this month’s expert spotlight interview is with Wan Sie Lee, Director for Trusted AI at Singapore’s Infocomm Media Development Authority (IMDA). Since joining the statutory board under the Singapore Ministry of Communications and Information in 2017, she’s seen more and more users become concerned about AI decision-making.


Learning by Design

One of Lee’s responsibilities is to help ease those concerns through AI Governance. She has worked tirelessly to grow a trusted AI ecosystem in Singapore, collaborating closely with industry and government partners to promote and regulate data protection. Lee believes there’s still work to be done to safeguard users’ interests. “AI is still in its infancy, so there are still blind spots,” she says. “For example, there can be some unintended bias and discrimination in things like hiring algorithms. That’s why governance is so important in keeping users protected.”

While AI brings great opportunities to businesses and governments worldwide, it also brings increased responsibility. Systematic, repeatable errors in AI systems can create unfair outcomes for users, and there is a growing expectation that those who create AI should be accountable for their systems’ impact on society.

As organisations scale their use of AI, concerns around user protection and legal compliance are being raised. While Lee recognises there isn’t a one-size-fits-all approach, she firmly believes that establishing a transparent governance structure can help build confidence in AI. “‘How much do you need to do to build trust?’ It’s an important question. If your AI isn’t trustworthy, then users lose faith in your organisation,” she says.

“Sometimes, it’s not about presenting a lot of detail but enough data points that clearly show how a decision was reached. Explainability is vital and can be considered in various ways, depending on the requirements of your organisation and industry.”


Guiding Responsible Practice

Lee and her team regularly speak to industry partners to gather research and information on how best to achieve Responsible AI across the board. “We’re striving for consistency and transparency in everything we do,” she says.

That approach is plain to see in the recently published Model AI Governance Framework. This comprehensive document provides readily implementable practices, covering four key areas:

1. Internal Governance Structures and Measures

2. Determining the Level of Human Involvement in AI-Augmented Decision-Making

3. Operations Management

4. Stakeholder Interaction and Communication

The idea is not only to establish consistency among AI practitioners, but also to address key ethical and governance issues when deploying AI solutions.

Thorough guidance like this has been IMDA’s signature. But is it possible to find a balance between supporting innovation, remaining business-friendly, and protecting users through AI Governance?

“There's probably a sweet spot somewhere. It’s paramount that innovation continues, but we must bring awareness to organisations on building Responsible AI,” says Lee.

“We have a baseline Personal Data Protection Act. We often ask ourselves: ‘where do we need to pass legislation so that we’re at a hygiene-level of capability?’ We work closely with sector regulators to introduce guidance and suggest voluntary tools and mechanisms. The idea is to help businesses make better, more responsible decisions when it comes to AI,” she says.

Shaping Future Conversations

Not all responsible AI practices are being led by government. Some organisations are attempting to get ahead of the curve. “I’ve been impressed by the start-ups and tech companies that share tools to check for bias and discrimination,” says Lee.

“We need the community to help us drive this forward. These companies, as practitioners, tend to be the ones that give us feedback and we really appreciate that.”

One example of this open dialogue is the partnership between IMDA and Meta. The two teams worked closely with several Singapore start-ups to prototype and test concepts focused on AI Explainability and improved usability.

“This work became useful examples we could incorporate into our guidance documents,” says Lee.

“It’s important that we collaborate to build consistent guidelines and standards that instil trust in AI. Because governance is massive, if you don’t do AI properly, it might negatively impact society somehow.”

“We’re determined to build harmony between IMDA, organisations of all sizes, and importantly the users of AI. When it comes to Responsible AI, we need to work together.”

You can learn more about the IMDA here and read our joint report on People-centric approaches to algorithmic explainability here.

Peter Tanham

Data Policy Manager, Meta & TTC Labs

Peter is a Data Policy Manager at Meta and TTC Labs, based in Dublin. Before joining Meta, he ran an analytics company and worked on transparency advocacy and political campaigning.

TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy.

Initiated and supported by Meta, and built on collaboration, the movement has grown to include hundreds of organisations, including major global businesses, startups, civic organisations and academic institutions.