Meet Bard, Google’s Answer to ChatGPT
Google isn’t about to let Microsoft or anyone else make a swipe for its search crown without a fight. The company announced today that it will roll out a chatbot named Bard “in the coming weeks.” The launch appears to be a response to ChatGPT, the sensationally popular artificial intelligence chatbot developed by startup OpenAI with funding from Microsoft.
Sundar Pichai, Google’s CEO, wrote in a blog post that Bard is already available to “trusted testers” and designed to put the “breadth of the world’s knowledge” behind a conversational interface. It uses a smaller version of a powerful AI model called LaMDA, which Google first announced in May 2021 and which is based on technology similar to that behind ChatGPT. Google says this will allow it to offer the chatbot to more users and gather feedback to help address challenges around the quality and accuracy of the chatbot’s responses.
Google and OpenAI are both building their bots on text generation software that, while eloquent, is prone to fabrication and can replicate unsavory styles of speech picked up online. The need to mitigate those flaws, and the fact that this type of software cannot easily be updated with new information, pose a challenge for hopes of building powerful and lucrative new products on top of the technology, including the suggestion that chatbots could reinvent web search.
Notably, Pichai did not announce plans to integrate Bard into the search box that powers Google’s profits. Instead he showcased a novel, and cautious, use of the underlying AI technology to enhance conventional search. For questions for which there is no single agreed-on answer, Google will synthesize a response that reflects the differing opinions.
For example, the query “Is it easier to learn the piano or the guitar?” would be met with “Some say the piano is easier to learn, as the finger and hand movements are more natural … Others say that it’s easier to learn chords on the guitar.” Pichai also said that Google plans to make the underlying technology available to developers through an API, as OpenAI is doing with ChatGPT, but did not offer a timeline.
The heady excitement inspired by ChatGPT has led to speculation that Google faces a serious challenge to the dominance of its web search for the first time in years. Microsoft, which recently invested around $10 billion in OpenAI, is holding a media event tomorrow related to its work with ChatGPT’s creator, believed to concern new features for the company’s second-place search engine, Bing. OpenAI’s CEO Sam Altman tweeted a photo of himself with Microsoft CEO Satya Nadella shortly after Google’s announcement.
Quietly launched by OpenAI last November, ChatGPT has grown into an internet sensation. Its ability to answer complex questions with apparent coherence and clarity has many users dreaming of a revolution in education, business, and daily life. But some AI experts advise caution, noting that the tool does not understand the information it serves up and is inherently prone to making things up.
The situation may be particularly vexing to some of Google’s AI experts, because the company’s researchers developed some of the technology behind ChatGPT—a fact that Pichai alluded to in Google’s blog post. “We re-oriented the company around AI six years ago,” Pichai wrote. “Since then we’ve continued to make investments in AI across the board.” He name-checked both Google’s AI research division and work at DeepMind, the UK-based AI startup that Google acquired in 2014.
ChatGPT is built on top of GPT, an AI model based on the transformer, an architecture first invented at Google, that takes a string of text and predicts what comes next. OpenAI has gained prominence for publicly demonstrating how feeding huge amounts of data into transformer models and ramping up the computer power running them can produce systems adept at generating language or imagery. ChatGPT improves on GPT by having humans rate different answers to the same prompt, feedback that is then used to fine-tune the model to produce more pleasing output.
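The core idea of "take a string of text and predict what comes next" can be illustrated with a deliberately simple toy. The sketch below is a bigram model that counts which word tends to follow which; it is in no way how GPT or LaMDA actually work (those use learned attention over long contexts), but it shows the same interface: text in, most likely next word out. All names here are illustrative, not from the article.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts word pairs in a training corpus,
# then predicts the most frequent continuation of a prompt's last word.
# Real transformer models replace these counts with learned parameters
# and condition on the whole context, not just one preceding word.

def train_bigram(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, prompt: str) -> str:
    """Return the most common continuation of the prompt's last word."""
    last = prompt.lower().split()[-1]
    candidates = counts.get(last)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigram(corpus)
print(predict_next(model, "on the"))  # "cat" follows "the" most often here
```

Sampling from such a model one word at a time, feeding each prediction back in as new context, is the same generation loop that large language models run at a vastly larger scale.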
Google has, by its own admission, chosen to proceed cautiously when it comes to adding the technology behind LaMDA to products. Besides hallucinating incorrect information, AI models trained on text scraped from the Web are prone to exhibiting racial and gender biases and repeating hateful language.
Those limitations were highlighted by Google researchers in a 2020 draft research paper that argued for caution with text generation technology. The paper irked some executives and led to the company firing two prominent ethical AI researchers, Timnit Gebru and Margaret Mitchell.
Other Google researchers who worked on the technology behind LaMDA became frustrated by Google’s hesitancy, and left the company to build startups harnessing the same technology. The advent of ChatGPT appears to have inspired the company to accelerate its timeline for pushing text generation capabilities into its products.