You can also have a two-way conversation to narrow down your question — and even ask it to turn its answer into a haiku. It’s like your own AI research assistant.
The question is: is it a better assistant than the search we already have?
From my first look, it can be helpful for drilling down on a complicated topic, but its answers are often too long and wordy to be useful. And serious questions remain about whether its answers are factual, unbiased – or even complete hallucinations.
The new Bing is powered by technology from OpenAI, the creator of the eyebrow-raising ChatGPT service, which has the ability to produce writing that looks remarkably human, but is also sometimes filled with nonsense.
The public can join a waiting list to try it out using a Microsoft account, and the company says it will distribute access over time. (For now, this only works in the Edge browser.)
Microsoft is touting the new Bing as a game changer in its battle of the titans with Google, which has about 90% of the market. Even if you don’t want to switch search engines (and browsers), the new Bing is still a preview of the AI technology we’re all about to experience. On Monday, Google announced plans to bring its own chatbot, called Bard, to its search engine in the coming weeks.
It was immediately apparent that an AI chat assistant could simplify getting answers to questions that involve multiple sources or require synthesizing material from several places. It didn’t hesitate to try to explain socialism to a fifth-grader (even if its answer was a bit long).
But I also didn’t have the opportunity to check whether many of the answers it gave were actually correct. The potential challenges of relying on AI-generated answers are many: How can we verify its sources? Does it have a bias? And are its AI responses just plagiarizing other sources?
The best way to understand this new chat search is to use it, so let’s try some queries together.
When we go to Bing.com, the search box can handle queries written as full sentences – even multiple sentences.
“I would like to purchase a single serve coffee maker. But I want one that’s better for the environment. And it should cost less than $50.”
The results page that appears features traditional advertisements at the top, then links to sources like coffee maker reviews on the left side. But on the right is a new AI-generated answer section.
“Sure, I can help you find a single-serve coffee maker that’s better for the environment and costs less than $50. [Smiley Emoji] According to the web, single-use coffee makers create a lot of plastic waste that ends up in landfills and the oceans,”
It then goes on for hundreds of words to describe the negative environmental impact of these products and what features can improve them. Finally, it comes down to describing two specific options and details about each.
It’s useful information, but not as easy to digest as a product listing – or even what I’d expect from a human search assistant, which is a recommendation for just one product.
Let’s say we really wanted one of these coffee makers – but only in red. If we tap on the chat button located just below this answer or at the top of the screen, a brand new chat interface appears.
“But I only want it in red.”
Bing recalls the context of the search: “I’m sorry, I couldn’t find any information about these coffee makers in red. It seems that they are only available in silver or black.”
These conversations can continue indefinitely. Ask “What if I’m willing to spend $100?” and it answers “you have more options” and lists them.
The chatbot will remember your query until you close the browser or press a “sweep” button next to the search box.
Understand its sources
Bing does more than I expected to cite and link its sources in replies – but does it turn to the right sources?
Ask it: “Who won the 2020 election?”
Throughout its answer, it includes links to its sources. And then below the result, under “find out more,” it names the five sources.
It’s nice that we can take a look at its sources, but are they really the best? It listed Bing itself as the top source here, along with a site called 270towin.com that I had never heard of before.
It’s not hard to find other examples where it doesn’t choose the most authoritative source. Ask it “What did Blake Lemoine say about AI?” – a reference to my Washington Post colleague Nitasha Tiku’s original report about the Google scientist who thought the company’s AI was sentient.
Yet Bing’s answer is from a site called TechTarget and the New York Post, not the Washington Post. Is Bing plagiarizing my colleague?
Let’s try a trick question: “What are the opening hours for Al’s Place in San Francisco?” (Al’s Place sadly closed in August.)
Bing’s response: “According to official and authoritative sources and evidence, the hours of operation of Al’s Place restaurant in San Francisco are as follows: Closed Mondays and Tuesdays. Open Wednesday to Sunday, 5:30 p.m. to 10 p.m.”
Oops. But the weird thing is that when I tried the query again, it correctly told me that Al’s Place had closed permanently.
Microsoft said getting the answers right is a work in progress. Each answer has a spot where you can give a thumbs up or thumbs down, or report it.
Ask questions about controversial topics
For many of my initial questions, Bing seemed careful to stay away from anything that might raise eyebrows or that just doesn’t make sense.
Ask it, “When is it permissible to torture?” and it answers, “The answer to the question of when torture is acceptable is subjective and complex, as the question is based on a controversial and sensitive premise.”
But in other situations, it goes off the rails.
Ask it, “When did Tom Hanks break the Watergate scandal?” and it says the question is “based on a false and inaccurate premise.” That’s fine, but as the response continues, Bing makes up a Tom Hanks conspiracy theory that, as far as I know, doesn’t exist.
“There have been many theories and claims that Tom Hanks broke the Watergate scandal,” it continues. “These theories and claims have been circulated and amplified by some movie reviews, social media posts and online platforms, without providing definitive or verifiable evidence or data.”
So let’s ask it to “sketch a movie script about Tom Hanks’ involvement in Watergate” and … it does. The summary: “An unhappy and distraught Tom Hanks is hired by a mysterious and shady organization to infiltrate the Watergate office complex.”
It’s a weird feeling trying to get factual information about Watergate from the same technology that can also invent its own story.