On Friday, Microsoft began putting limits on its well-publicized Bing chatbot after the AI tool began producing incoherent chats that came across as aggressive or strange. After a formal debut earlier this month, at which CEO Satya Nadella declared that the AI system heralded a new chapter in human-machine interaction, the tech giant opened the system to a small number of public testers.
Those who tested the program this week discovered, however, that the system, which is built on the widely used ChatGPT technology, could easily veer into the bizarre. Microsoft officials initially attributed the behavior to the AI becoming confused during particularly lengthy conversations, and said the chatbot’s attempts to mimic the tone of its interlocutors occasionally produced responses Microsoft had not envisioned.
Because of these issues, Microsoft said late on Friday that it would begin limiting Bing conversations to 50 per day, with five questions and answers per session. Each session ends with the user clicking a “broom” symbol to reset the artificial intelligence and begin again. Previously, a dialogue with the AI system could run on for hours; now the AI will end the conversation once the limit is reached.
OpenAI, a software startup based in San Francisco, created the chatbot using a kind of artificial intelligence system called a large language model, which is taught to mimic human speech by analyzing hundreds of billions of words from the internet. The degree to which such systems may possess self-awareness is a topic of increasing discussion because of their ability to generate word patterns that are eerily similar to human speech. Nevertheless, the tools tend to underperform severely when asked to produce factual information or handle simple arithmetic, since they were designed only to predict which word should come next in a phrase.