New York City (NYC) Mayor Eric Adams is fending off criticism of the city's new artificial intelligence (AI) chatbot, which has been found in recent days to give small business owners incorrect answers that, if followed, would break the law.
The AI-powered chatbot has drawn fire for dispensing inaccurate advice to entrepreneurs, with some of its responses effectively encouraging them to bend local rules or violate the law outright.
The MyCity chatbot, launched in October as a resource hub for business owners, offers algorithmically generated answers to questions about navigating the city's bureaucratic processes. Despite the backlash, the city has kept the chatbot on its website. Mayor Adams defended it, acknowledging errors in some areas and stressing that it is a pilot project. Users are warned that the bot may deliver incorrect or dangerous information and that its responses are not legal advice.
The chatbot has given inaccurate answers to several inquiries, wrongly suggesting that employers may fire workers for reporting sexual harassment or for concealing a pregnancy. It has also contradicted the city's waste management policies, claiming that businesses face no composting requirements and can put their trash in black garbage bags.
Microsoft, whose Azure AI services power the chatbot, is working with NYC to make the service more accurate and consistent with official policies and city guidelines. The errors are in line with well-known shortcomings of generative AI: OpenAI's ChatGPT and similar systems are prone to 'hallucinating', confidently making false claims.