Popular communications platform Slack has defended its use of customer data to develop some of its artificial intelligence (AI) and machine learning (ML) models, following an outcry from some users.

The issue stemmed from the chat app’s AI privacy principles, which until this week stated that its systems “analyse customer data (e.g. messages, content and files)” and usage data to develop the company’s range of AI models.

The principles also state that organisations must email Slack if they wish to opt out of having their customer data used to train these models.

Slack users — some of whom pay for the Salesforce-owned service — voiced their concern about the policy on social media last week, with some confused about whether their messages were being used to train AI, or unsure why they had not been asked to opt in.

"Why are these kinda things opt out? And need to be discovered. We're literally discussing switching to Teams at my company (1,500 employees)," one wrote on a forum.

"When we send our notice, we are going to be sending a notice that we want none of our data used for any ML training from Slack or anyone else," said another.

Slack has since updated its privacy principles and clarified that it does not use customer data to train any of its generative AI systems, which power features such as conversation summaries and AI-powered search.

However, the company says it does use customer data to train non-generative models for features such as emoji suggestions, basic search, recommendations and text autocomplete.

“We do not build or train these models in such a way that they could learn, memorise, or be able to reproduce any customer data of any kind,” the company said.

“While customers can opt-out, these models make the product experience better for users without the risk of their data ever being shared.”

Slack says its add-on generative product, Slack AI, uses third-party large language models (LLMs) that are not developed using its customers’ data.

‘Customer data belongs to the customer’

In a statement on social media platform X, Slack said it agreed with customer feedback that it “could have better explained” its approach to using customer data to train some of its systems.

“We believe strongly that customer data belongs to the customer,” the company said.

“We do not develop LLMs or other generative models using customer data. No customer data is used to train third-party LLM models.”

Slack’s AI privacy principles also state that customer data “will not leak across workspaces”, and the company says it has technical controls which prevent it from accessing the underlying content of that data.

Organisations that opt out of AI training will still benefit from Slack’s AI and ML models, the company says.

In a blog post, the company admits that while customers who opt out will notice little change, “the greater [the] number of customers who opt out, the worse these types of models tend to perform overall”.

Slack is far from the first organisation to use some of its customer data to train AI and ML systems without first asking users to opt in.

Users of OpenAI’s popular generative AI system ChatGPT must opt out if they do not want their data to be used to improve the model, even if they are using it for free or without an OpenAI account.