Russia’s Sberbank said on Monday it had released a technology called GigaChat as a rival to ChatGPT, initially in an invite-only testing mode, joining the artificial intelligence chatbot race. The release last year of ChatGPT, a chatbot from the Microsoft-backed startup OpenAI, has caused a sprint in the technology sector to put AI into more users’ hands.
The technology can be used to write poetry, marketing copy, and computer code, explain quantum mechanics in simple terms, or even pass exams such as those for an MBA program. That’s all thanks to its ability to mimic human language.
But there are limitations. As with other AI chatbots, the models are still learning and can make mistakes. And because they are trained on vast amounts of text gathered from across the internet rather than a single curated database, they can reproduce the biases found in that material.
For example, such models have been known to give racist answers, though their developers say they have worked to curb this tendency.
GigaChat has the potential to be very intelligent, Sberbank said, and it will be “a real competitor” in the market.
Its key differentiator, according to Sberbank, is that it communicates more intelligently in Russian than other foreign neural networks. This is important for Russia, whose dominant bank has made significant technological investments over the past few years to reduce its reliance on imports.
That’s because the model is trained on data collected over many years, including the collective written output of people worldwide. In other words, it learns not from what it hears but from what it reads.
Meanwhile, GPT-4, the newest model powering ChatGPT, can respond to images as well as text, for instance giving recipe suggestions from photos of ingredients. It can also process more than 25,000 words, roughly eight times as many as the previous version.
The new model can even create a website from a rudimentary sketch, as OpenAI co-founder and president Greg Brockman demonstrated at its launch.
It can also write captions and descriptions for images and can be used to translate texts into other languages.
However, several critics say it can misunderstand questions, producing responses that sound plausible but are useless or make no sense. This is one of the reasons the developer question-and-answer site Stack Overflow banned ChatGPT-generated answers.
Despite these limitations, many people use it, particularly in schools and businesses. It can be a helpful learning aid, but some teachers warn that it should be used cautiously.
A recent study found that it can be difficult for students to tell whether the answers they get are accurate. Sometimes the software argues for wrong answers as if they were right, a behavior researchers call hallucination.
The technology is still in its early stages, and users should test it critically before relying on it. Even so, it’s a welcome addition to the ever-growing list of AI-powered chatbots, and it might yet change how we work.