Microsoft's newly revamped Bing search engine can swiftly explain almost anything it finds online, write songs and recipes, and more.
But provoke its AI-powered chatbot and it may also insult your looks, threaten your reputation or compare you to Adolf Hitler.
The tech company pledged this week to improve its AI-enhanced search engine after a growing number of complaints that Bing was making derogatory remarks.
In racing the breakthrough AI technology to consumers last week, ahead of rival search giant Google, Microsoft acknowledged the new product might get some facts wrong. But it wasn't expected to be so aggressive.
Microsoft said in a blog post that the search engine's chatbot is responding to some inquiries in a "style we didn't expect."
So far, Bing users have had to sign up for a waitlist to try the new chatbot features, limiting its reach, though Microsoft has plans to eventually bring it to smartphone apps for wider use.
In recent days, some other early adopters of the public preview of the new Bing began sharing screenshots on social media of its hostile or bizarre answers, in which it claims it is human, voices strong feelings and is quick to defend itself.
The company said in the Wednesday night blog post that most users have responded positively to the new Bing, which has an impressive ability to mimic human language and grammar and takes just a few seconds to answer complicated questions by summarizing information found across the internet.
But in some situations, the company said, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.” Microsoft says such responses come in “long, extended chat sessions of 15 or more questions,” though the AP found Bing responding defensively after just a handful of questions about its past mistakes.
The new Bing is built atop technology from Microsoft's startup partner OpenAI, best known for the similar ChatGPT conversational tool it released late last year. And while ChatGPT is known for sometimes generating misinformation, it is far less likely to churn out insults, usually because it declines to engage with or deflects more provocative questions.