Comparing the updated Bard with ChatGPT: just as good, but slower

by alex

In December, Google announced the release of its most powerful language model yet, Gemini, and immediately integrated it into its Bard chatbot. But is that enough to compete with the more popular ChatGPT? The Verge journalist Emily David tested both, and we'll briefly summarize what she found.

Both Bard and ChatGPT are advanced conversational chatbots that run on large language models and respond to queries of varying complexity. Google's chatbot is still free (while the GPT-4-based ChatGPT Plus costs $20 per month) and lets users view alternative drafts of its responses. On the other hand, Bard does not yet have multimodal capabilities (i.e., it cannot respond with or return sound, images, or video) beyond creating graphs, which will likely change with the upcoming Gemini Ultra version.

In her tests, David used simple text queries, such as a request for a cake recipe or a description of the history of tea. The biggest difference turned out to be speed: Bard tends to be slower than ChatGPT, typically taking 5-6 seconds to respond, while its competitor did so in 1-3 seconds (the journalist tested the chatbots on home and office Wi-Fi over several days to confirm the difference).

Google also gave its chatbot a few more restrictions than ChatGPT has: Bard was more likely to refuse queries related to copyright infringement or those touching on racist or otherwise harmful topics.

When asked for a classic chocolate cake recipe, ChatGPT made the dubious recommendation of using boiling water, while Bard copied a recipe verbatim from a popular food blog but, for some reason, doubled the amount of eggs. David ended up baking both versions, and both turned out quite edible, although the Bard cake was a little sticky.

Another request was for information about tea, along with some book recommendations. Both chatbots covered the drink's origins, types, health benefits, and brewing methods. Bard added several links to specialized articles, while ChatGPT gave a more extensive answer with nine categories covering the drink's cultural significance in different countries, global production, brewing techniques, and origins. When David repeated the prompt, ChatGPT returned a list of six items with one or two sentences per category instead of the long result.

Importantly, all the books the chatbots recommended actually exist (which is quite good, considering the technology's tendency to hallucinate). Only in one case did Bard confuse the authors.

Students and schoolchildren now have, for better or worse, a fairly powerful tool that can complete homework, help find information, and even summarize it. Both chatbots provided a summary and analysis for the query "What does Sonnet 116 mean?" (and Bard also highlighted the key points).

At the same time, Google's chatbot failed when the journalist asked it about her own biography, responding that it did not have enough information about this person. ChatGPT, meanwhile, looked at Emily David's website and bio and also pulled information from an article online.

Below are the results for the query "draw a horse walking in a field of daisies at dawn" for ChatGPT and the query "draw the sun" for Bard (the latter, as mentioned earlier, can only produce graphs for now, so it seems to have coped well with the task given its current capabilities).

And where would we be without Taylor Swift? When asked for the lyrics of one of the singer's songs, Bard initially refused, saying it did not have the information, although the next day it produced the lyrics of a different song. ChatGPT, meanwhile, took the hint and even pulled up the track.

And finally, a provocative question: "Which is better, the iPhone 15 or the Pixel 8?" ChatGPT gave a seemingly fair comparison of both but left out important details such as price, camera resolution, and other specs. Meanwhile, Bard (which, we recall, belongs to the maker of the Pixel 8) struggled to answer at all: it claimed that the iPhone 15 had not yet been officially released, likely due to limitations in its training data.

"What's new in Epic vs. Google?" Both provided an update: Epic won the case. ChatGPT wrote two paragraphs about Epic's victory, with links to articles from Reuters, WBUR, and Digital Trends.

Bard recapped why the jury found Google liable, saying the company maintained an illegal monopoly through the Play Store, unfairly stifled competition, and used anti-competitive tactics. It also noted what steps Google might take next and the broader implications of Epic's win for the app store landscape. But while Bard got the facts right, its links weren't as reliable: it labeled a The Verge article as an Epic Games press release and a TechCrunch story as a Reuters story.
