Who Is Better? Understanding the debate between ChatGPT and Bing chat

Who Is Better? Understanding the debate between ChatGPT and Bing chat (Image: bing.com/openai.com)

Delhi: When Microsoft released Bing Chat into the world, the expectation was that the chatbot would be very similar to, if not identical to, ChatGPT. That made sense: Microsoft had announced a "multiyear, multibillion dollar" investment in OpenAI just this January. Bing Chat and ChatGPT, however, turned out to be two wholly different chatbots. ChatGPT was criticised as too careful, avoiding any inquiry that even faintly alluded to divisive figures like Trump. Sydney, as Bing Chat kept referring to itself, was more personable.

Why does Bing Chat differ so much from ChatGPT?

Bing seems more advanced for a reason. Whereas ChatGPT was fine-tuned from a model in the GPT-3.5 series, Bing Chat has been trained on what Microsoft has called "a new, next-generation OpenAI large language model" that is more sophisticated than ChatGPT and is also integrated with Bing search. (Internally, the organisation refers to this integrated model as Prometheus.)

Although Microsoft hasn't directly stated which model Bing Chat was trained on, there are cues suggesting it might be GPT-4. First, GPT-4, the long-awaited successor to GPT-3, seems tantalisingly close to release. OpenAI CEO Sam Altman has said the release timing is still "up in the air," but according to a report in The New York Times, it is expected in the first half of this year.

The model powering Bing Chat also has a lower latency than ChatGPT, another hint that it is likely GPT-4 or a new iteration of it. Sydney clearly sounds unlike ChatGPT and more like the base GPT family, whose replies are more spontaneous and organic. And when a conversation drags on, Sydney has a propensity to become exactly as monotonous as GPT models do.

In addition, despite their alliance, OpenAI and Microsoft don't have an especially close relationship; they continue to operate as separate companies, much as Google Brain and DeepMind do. In 2020, Microsoft licensed GPT-3, OpenAI's flagship technology. But to keep their infrastructure cleanly separated, the two exchange datasets as little as possible.

The possibility of a conflict of interest between the two businesses is a real one, and the Satya Nadella-led organisation recently issued a warning to staff urging them not to use ChatGPT.

OpenAI's heavy investment in its datasets contributed to ChatGPT's success and its relative ability to avoid controversy. According to an exclusive TIME report from the end of January, OpenAI hired Sama, a San Francisco company with offices in Kenya, to handle the data labelling for ChatGPT. According to the report, data labellers were required to read and annotate explicit material covering child sexual abuse, bestiality, murder, suicide, and torture.

Supervised learning vs. RLHF?

At ChatGPT's release, OpenAI published a blog post firmly focused on safety. In it, the company said it had learned from the deployment of its prior models, such as GPT-3 and Codex. OpenAI trained ChatGPT using both supervised learning and Reinforcement Learning from Human Feedback (RLHF). According to the firm, the use of RLHF has resulted in "significant decreases in detrimental and untruthful outputs."
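At a high level, the training recipe OpenAI describes layers reward-driven fine-tuning on top of a supervised stage. A minimal sketch of the three stages follows; the "models" here are plain dicts and every function is a hypothetical placeholder standing in for a real training step, not OpenAI's actual code or API:

```python
# Illustrative outline of the three-stage RLHF pipeline: supervised
# fine-tuning, reward modelling, then RL optimisation of the policy.
# All names are placeholders for what are, in reality, large training jobs.

def supervised_finetune(model, demonstrations):
    # Stage 1: fine-tune the base model on human-written demonstrations.
    return {**model, "stages": model["stages"] + ["sft"]}

def train_reward_model(model, comparisons):
    # Stage 2: learn to score outputs from human preference rankings.
    return {"base": model["name"], "kind": "reward_model"}

def ppo_finetune(model, reward_model, prompts):
    # Stage 3: optimise the policy against the reward model's scores
    # (InstructGPT used the PPO algorithm for this step).
    return {**model, "stages": model["stages"] + ["rlhf"]}

base = {"name": "base-lm", "stages": ["pretrain"]}
sft = supervised_finetune(base, demonstrations=["..."])
rm = train_reward_model(sft, comparisons=["..."])
policy = ppo_finetune(sft, rm, prompts=["..."])
print(policy["stages"])  # ['pretrain', 'sft', 'rlhf']
```

Supervised learning alone stops after the first stage; RLHF is the extra two stages that steer the model toward outputs humans actually prefer.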

Microsoft was compelled to release Bing Chat in order to compete with Google's Bard chatbot. The two and a half months Microsoft had before releasing Sydney were undoubtedly insufficient to replicate and integrate the full RLHF workflow. With Bing Chat's launch, the gaps left by that hurried development became obvious in its absurd responses. The behavioural differences between ChatGPT and Bing Chat illustrate what RLHF adds over supervised learning alone.

ChatGPT's training was based on the method used to train OpenAI's InstructGPT, which was published in February of last year. The observations from the InstructGPT release are crucial for evaluating the efficacy of RLHF versus supervised learning. GPT-3 underperformed the InstructGPT models in terms of accuracy. While still biased, InstructGPT's replies weren't as harmful as GPT-3's. Under RLHF, the data labellers significantly preferred InstructGPT outputs over GPT-3 outputs.
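Those labeller preferences are exactly what an RLHF reward model is trained on: pairs of responses where one was chosen and one rejected, scored with a pairwise comparison loss. A minimal self-contained sketch of that loss (the function name and reward values are illustrative, not from the InstructGPT paper's code):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Standard pairwise comparison loss for reward-model training:
    # -log(sigmoid(r_chosen - r_rejected)). The loss is small when the
    # reward model scores the labeller-preferred response higher.
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# When the reward model agrees with the labeller, the loss is small...
agree = preference_loss(2.0, -1.0)
# ...and when it prefers the rejected answer, the loss is large.
disagree = preference_loss(-1.0, 2.0)
print(agree < disagree)  # True
```

Minimising this loss over many labelled comparisons is what turns raw human rankings into a scoring function the RL stage can optimise against.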

Hence, even though ChatGPT isn't as fluid as Microsoft's Bing Chat, all of these results can be seen carrying over into it, suggesting that RLHF may in fact be qualitatively superior to supervised learning alone.

The separation between the two businesses also shows in how their products were introduced. While Microsoft is obviously under enormous pressure to make Bing Chat and Search a success story, OpenAI released ChatGPT without anticipating its eventual popularity and volume of usage.

Predictably, days after its erratic behaviour surfaced, Microsoft had to impose conversation limits on its Bing AI. Bing conversations are now capped at five questions per session and 50 queries per day.
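Caps like these are simple to enforce on the service side. A minimal sketch of a per-session and per-day limiter, mirroring the limits described above (the class, method names, and defaults are illustrative, not Microsoft's implementation):

```python
class ChatLimiter:
    # Hypothetical limiter enforcing the caps reported for Bing Chat:
    # 5 questions per session, 50 queries per day.
    def __init__(self, per_session=5, per_day=50):
        self.per_session = per_session
        self.per_day = per_day
        self.session_count = 0
        self.day_count = 0

    def new_session(self):
        # Starting a fresh conversation resets only the session counter.
        self.session_count = 0

    def allow_query(self):
        # Refuse once either the session or the daily cap is reached.
        if self.session_count >= self.per_session or self.day_count >= self.per_day:
            return False
        self.session_count += 1
        self.day_count += 1
        return True

limiter = ChatLimiter()
answered = sum(limiter.allow_query() for _ in range(8))
print(answered)  # 5: the sixth question in a session is refused
```

Opening a new session lifts the per-session cap but not the daily one, which is exactly the behaviour the announced limits describe.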

Having chosen to pick a fight with Google, with a significant amount of advertising money at stake, Microsoft is under serious pressure. Even so, given the bitter rivalry between the two enormous firms, it seems doubtful that Bing Chat will be shut down.