Microsoft has walked back its plan to launch a controversial generative AI tool that near-continuously takes screenshots of a user’s activity on a device and stores them in a searchable database.

Earlier this year, Microsoft unveiled Recall AI, an artificial intelligence-powered tool that tracks a user’s activity on a computer, keeps screenshots of everything from web browsing to voice chats, and then lets the user search through these files with the help of AI.

It quickly led to widespread security and privacy concerns, and revelations that the tool would likely hoover up everything from passwords to bank account numbers.

Microsoft had planned to include Recall AI in the rollout of its new Copilot+ PC devices, beginning this week.

But in an announcement late last week, the American tech giant said it would no longer go ahead with this, and would instead test the controversial tool with a select group of users before a potential wider launch.

In a blog post, Microsoft confirmed that Recall AI will now be made available as part of the Windows Insider Program – a public software testing program – in the “coming weeks”.

“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” Microsoft corporate vice-president, Windows and Devices, Pavan Davuluri, said in the blog post.

“This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available for all Copilot+PC users.”

‘Photographic memory’

Microsoft revealed its Copilot+ PCs in May this year: a new line of PCs with AI features baked in.

This was to include Recall AI, which constantly takes screenshots of the computer to help the user remember something they were doing at a certain time, sometimes months earlier.

Microsoft described this as “like having photographic memory”, with generative AI to analyse the images locally on the device and then assist the user in searching across “any application, website [or] document”.

But the announcement sparked privacy and security fears, with concerns it could facilitate spying and put sensitive data in danger of being breached.

Tesla CEO Elon Musk even labelled the new tool as something from a “Black Mirror episode”.

An FAQ on the new tool revealed that it would not censor any information and would collect some highly sensitive data.

“Recall does not perform content moderation,” the FAQ said.

“It will not hide information such as passwords or financial account numbers.

“That data may be in snapshots stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry.”

The tool would only let users filter out certain websites if they were using Microsoft’s own web browser, Edge.

Recall AI was also originally set to be turned on by default, but Microsoft backtracked on this earlier this month.

Along with the delayed launch, Microsoft has now announced that data collected by Recall AI will be protected with “just in time” decryption, backed by Windows Hello Enhanced Sign-in Security.

This means the data will only be decrypted and accessible when a user authenticates.

AI teamed with hardware

Tech giants have recently been racing to build powerful generative AI tools such as Recall AI into their hardware.

Earlier this month, Apple announced the integration of generative AI, dubbed Apple Intelligence, into its iPhones, iPads and Macs.

But these tools will only be available on select high-end Apple products: the iPhone 15 Pro, the iPhone 15 Pro Max, and iPads and Macs with Apple’s M1 chip or newer.

The AI tools to be incorporated into these Apple devices include writing aids, AI-generated email summaries, improved notification prioritisation and audio transcripts.