As the applications for AI continue to develop, so too do the risks.

Without laws keeping up, it falls to technology companies to moderate how the technology is used.

Keen to be on the right side of history, Microsoft has updated its guidelines for the responsible use of AI and will retire a range of features from its Azure Face service, which is used for tasks such as ID verification, contact-free access control and face blurring for privacy.

The capabilities being retired can infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup.
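For context, this kind of attribute inference is exposed through the Face API's detection calls. The sketch below, using the Python SDK (azure-cognitiveservices-vision-face) with placeholder endpoint and key values, shows roughly what the soon-to-be-retired capability looked like; the attribute names and response fields follow the pre-retirement API and may already be rejected by the service.

```python
# Minimal sketch of the attribute inference Microsoft is retiring.
# Assumes the azure-cognitiveservices-vision-face package and a valid Face
# resource; the endpoint, key and image URL below are placeholders, and the
# emotion/gender/age attributes are the ones being withdrawn.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-key>"  # placeholder

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Request the soft-attribute inferences named in the article.
detected = face_client.face.detect_with_url(
    url="https://example.com/portrait.jpg",  # placeholder image
    return_face_attributes=[
        "age", "gender", "smile", "emotion", "facialHair", "hair", "makeup"
    ],
)

for face in detected:
    attrs = face.face_attributes
    print(f"age={attrs.age}, gender={attrs.gender}, smile={attrs.smile}")
    # Emotion is returned as per-category confidence scores.
    print(attrs.emotion.as_dict())
```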

Citing the lack of scientific consensus on the definition of ‘emotions’, the company said such inferences do not generalise reliably across use cases, regions and demographics.

There are also heightened privacy concerns around this type of capability.

“[We] recognise that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” Microsoft said in a statement.

Limits on voice mimicking too

Microsoft has systematically analysed all its AI systems that purport to infer people’s emotional states, whether the systems use facial analysis or any other AI technology.

Across these systems, Microsoft will make system-specific validity assessments at the outset, grounded in science, and apply its use-case policy in high-impact scenarios, the company explained.

Microsoft will also apply limits to Azure AI’s neural voice technology, which enables the creation of a synthetic voice that sounds nearly identical to the original speaker; Amazon, by contrast, is developing a similar feature for its voice assistant Alexa.

Recognising that the technology could also be used to impersonate speakers and deceive listeners, the company is restricting customer access to the service, defining acceptable use cases, and establishing technical guardrails to help ensure the active participation of the speaker when a synthetic voice is created.
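For readers unfamiliar with the service, synthesis with a custom neural voice runs through the Azure Speech SDK once access has been approved. A minimal sketch is below, assuming the azure-cognitiveservices-speech package and an approved Custom Neural Voice deployment; the key, region, deployment ID and voice name are placeholders.

```python
# Minimal sketch of synthesising speech with an approved custom neural voice.
# Assumes Limited Access registration has been granted; all identifiers below
# are placeholders, not real credentials or deployments.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-key>",  # placeholder
    region="<your-region>",            # placeholder
)
# A custom neural voice is addressed by its deployment (endpoint) ID and the
# voice name chosen when the model was trained.
speech_config.endpoint_id = "<your-custom-voice-deployment-id>"
speech_config.speech_synthesis_voice_name = "<YourCustomVoiceName>"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a custom neural voice.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed, audio bytes:", len(result.audio_data))
else:
    print("Synthesis did not complete:", result.reason)
```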

Its speech-to-text technology has also been found to be problematic.

In 2020, a Stanford study showed how the technology produced error rates for members of some African American communities that were nearly double those for non-African American users.
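Error rates for speech-to-text are typically reported as word error rate (WER): the word-level edit distance between the system transcript and a human reference, divided by the reference length. The following self-contained sketch illustrates the metric in general terms; it is not the study’s code.

```python
# Word error rate (WER): edit distance between a hypothesis transcript and a
# reference transcript, counted in words, divided by the reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming edit distance over words
    # (substitutions, insertions and deletions each cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


print(word_error_rate("the quick brown fox", "the quack brown box"))  # 0.5
```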

After the study was published, Microsoft learnt through pre-release testing that its speech-to-text technology did not adequately account for the diversity of speech among people from different backgrounds and regions.

Following a review by a sociolinguist, it set about expanding its data collection efforts while also grappling with how best to collect data from communities in a way that engages them appropriately and respectfully.

An evolving set of guidelines for AI

In these initiatives, Microsoft has been guided by its Responsible AI Standard, the second iteration of its policy intended to steer product development towards beneficial and equitable outcomes.

“That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability,” the company said.

A multidisciplinary group of researchers, engineers and policy experts spent 12 months developing the second version of the policy.

The standard defines outcomes that teams developing AI systems must strive to secure.

It breaks down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance and human oversight, with steps teams must take to ensure AI systems meet the goals throughout the system lifecycle.

It also maps available tools and practices to specific requirements, helping teams meet them as they build AI systems.

While the company is taking its own action, it says wider regulation is lacking: laws have not caught up with AI’s unique risks or with society’s need for fairness and inclusion in this kind of technology.

“While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we need to work towards ensuring AI systems are responsible by design.”