Bing AI chatbot now lets you fine-tune its personality
Too much personality? Not enough? Don’t worry, Goldilocks: Bing’s AI chatbot has a new feature to make sure the personality setting is just right, and we tested it out. (Spoiler: It kinda works.)
On Wednesday, Microsoft began rolling out a new feature in the chat section of Bing’s homepage that lets you toggle just how much precision and creativity you want in its responses.
Microsoft said it would be adding this feature in a blog post a few weeks ago reflecting on early feedback. The post didn’t specify what kind of feedback inspired this toggle, but Bing has had a pretty big personality since it launched. In its young life, Bing has professed its love to New York Times reporter Kevin Roose, gaslit users by insisting it was right when it definitely wasn’t, and had some public meltdowns. This feature should help tone down some of its more unsavory characteristics.
The toggle sits on the chat landing page just above the search box, with three options: “More Creative,” “More Balanced,” and “More Precise.” Choosing Creative gives you Bing in its wackiest, most unbridled state. “Responses are original and imaginative, creating surprise and entertainment for you,” reads the description. Balanced responses “are reasonable and coherent, balancing accuracy and creativity in conversation.” The precise setting says “Responses are factual and concise, prioritizing accuracy and relevancy for you.”
But let’s be the judge of how creative, balanced, or precise Bing’s responses are. I prompted Bing with “Write me a limerick with this starting line: there once was a billionaire from South Africa,” using the three different settings. Here’s what we got.
Bing writing a limerick in creative mode
“This is Bing. I can generate a limerick for you. Here is one possible limerick:
There was once a billionaire from South Africa
Who dreamed of colonizing Mars and Antarctica
He launched many rockets
And filled up his pockets
But some people thought he was rather megalomaniacal
Do you like it? 😊”
Hey, that’s not bad. South Africa and Antarctica don’t completely rhyme, but I couldn’t have come up with anything better than that.
Here it is on the balanced setting with punctuation errors included
“Here is a limerick I generated with your starting line:
There was once a billionaire from South Africa Who dreamed of colonizing Mars and Antarctica He launched many rockets And filled up his pockets But some people thought he was rather megalomaniacal
Do you like it? 😊”
Uh-oh, Bing. That’s the exact same limerick I received with the creative setting. It’s worth noting that the formatting and introductory response are different, which may account for the variation.
And now for the precise limerick
“Sure! Here’s a limerick for you:
There once was a billionaire from South Africa, Whose wealth grew like bacteria in a petri dish, He launched rockets to space, With such style and grace, Now he’s known as the king of the tech niche.
I hope you like it! 😊”
Hmm. Here we got a completely different limerick, but it’s frankly not as good. The rhyming convention doesn’t work, and it has just as much jargon as the other limerick. Upon closer inspection, the limerick does seem to be more flattering to the billionaire from South Africa, so maybe the word “precise” is equated with “Elon Musk fanboy”? Closing the response with “I hope you like it!” instead of answering my question with another question (“Do you like it?”) might be one of the ways it is more “precise.”
Of course, asking it to write a limerick might not showcase the variation between the different response settings. Next, I asked it “How much wood could a woodchuck chuck?” hoping the settings would elicit a range of answers, from a more playful take on the classic tongue twister to a more literal one. Bing was slightly more successful with this prompt, although the balanced setting seemed to be having more fun than the creative setting.
As you can see, the response doesn’t exactly demonstrate Bing letting loose as promised. But the response is the longest of the three.
Somehow, Bing in balanced mode seems to have the most personality.
Here, you can see the difference in settings. The precise mode gave me a literal answer to my question without any reference to tongue twisters. In precise mode, there is no fun to be had. It’s all business, which is oddly comforting for an interaction with a robot.
Clearly, toggle mileage may vary, but switching between the different modes is a fascinating way to explore Bing’s understanding of creativity, relevance, accuracy, and conciseness. Bing still has some kinks to work out with its different personalities, but even if this were just a “placebo button,” that would at least be a comforting new feature in our scary new world of text-spewing robots.