Microsoft will update its terms of service at the end of September to clarify that its Copilot AI services shouldn’t be seen as a substitute for real human advice.
As AI chatbots become more common in customer service, healthcare, and even legal advice, Microsoft is reminding users that while these tools are helpful, they shouldn’t be relied on as the final word. “AI services are not designed, intended, or to be used as substitutes for professional advice,” reads the updated Microsoft Services Agreement.
The company specifically cites its health bots as an example. The bots “are not designed or intended as substitutes for professional medical advice or for use in the diagnosis, cure, mitigation, prevention, or treatment of disease or other conditions,” the new terms explain. “Microsoft is not responsible for any decision you make based on information you receive from health bots.”
The updated Services Agreement also outlines additional AI practices that are now prohibited. For instance, users are not allowed to use the AI services for data extraction. “Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services,” the agreement states.
The company is also banning attempts to reverse engineer its models to uncover their weights, as well as the use of its AI data “to create, train, or improve (directly or indirectly) any other AI service.”
“You may not use the AI services to discover any underlying components of the models, algorithms, and systems,” the new terms state. “For example, you may not try to determine and remove the weights of models or extract any parts of the AI services from your device.”
Microsoft has consistently warned about the risks of misusing generative AI. With these updated terms of service, the company appears to be shoring up its legal protections as its AI products reach a wider audience.