The limits were originally put in place after several users showed the bot behaving strangely during conversations. In some cases, it identified itself as “Sydney.” It responded to accusatory questions by turning hostile and refusing to engage with users. In a conversation with a Washington Post reporter, the bot said it could “feel and think” and reacted angrily when told the conversation was on the record.
Microsoft spokesman Frank Shaw declined to comment beyond Tuesday’s blog post.
Microsoft is trying to walk a fine line between pushing its tools into the real world to generate marketing hype and gather free testing and feedback from users, and limiting what the bot can do and who has access to it, keeping potentially embarrassing or dangerous behavior out of the public eye. The company initially won praise from Wall Street for launching its chatbot before archrival Google, which until recently had been seen as the leader in AI technology. Both companies are racing each other and smaller firms to develop and show off the technology.
Bing Chat is still available only to a limited number of people, though Microsoft is busy approving more from a waiting list of millions, a company executive tweeted. While its February 7 launch event was billed as a major product update that would revolutionize the way people search online, the company has since framed Bing’s launch as more about testing and finding bugs.
Bots like Bing are trained on enormous amounts of raw text culled from the Internet, including everything from social media comments to academic papers. Based on that information, they predict what kind of answer to any given question would make the most sense, which makes them uncannily good at sounding human. AI ethics researchers have warned in the past that these powerful algorithms can behave this way, and that without the right context people might believe they are sentient or give their responses more credence than they deserve.
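The prediction idea described above can be illustrated with a deliberately tiny sketch: count which word tends to follow each word in some training text, then always emit the most frequent follower. Real chatbots use vast neural networks rather than simple counts, and the miniature corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy training text (invented for this example); real systems train on
# billions of words scraped from the Internet.
corpus = (
    "the bot answers the question "
    "the bot predicts the next word "
    "the bot sounds human"
).split()

# Count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # "bot" follows "the" most often in this corpus
```

Even this crude counting scheme produces fluent-looking fragments from its training data, which hints at why statistical prediction at a vastly larger scale can read as human, despite there being no understanding behind it.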