Is ChatGPT an Actual Threat to Human Translators?
The Chat Generative Pre-trained Transformer, commonly known as ChatGPT, is an artificial intelligence chatbot that allows users to ask questions in conversational or natural language. Developed by the company OpenAI, ChatGPT was released in November 2022 with the aim of generating human-like text for a variety of applications, such as automated content creation.
In the face of such advances in artificial intelligence, many professionals and organizations are asking themselves: is this really a tool poised to replace flesh-and-blood translators?
Model Limitations
Language models generate text by estimating the probability of each word given the words that precede it in the sequence. Trained on about 45 terabytes of text from the Internet, the GPT-3 language model underlying ChatGPT learns that some sequences of words are far more likely to occur than others.
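The idea of "predicting the next word from the previous ones" can be illustrated with a toy bigram model. This is only a sketch of the statistical principle: real GPT models use transformer neural networks trained on vast corpora, not simple word-pair counts, and the tiny corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word model: count how often each word follows another
# in a tiny corpus, then report the likeliest continuation.
corpus = "the cat sat on the mat the cat ate the food".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, count = counts.most_common(1)[0]
    return best, count / total

print(most_likely_next("the"))  # "cat" follows "the" in 2 of 4 cases
```

In this miniature corpus, "cat" follows "the" half the time, so the model prefers it; a large language model does the same kind of ranking, but over hundreds of billions of learned parameters rather than raw counts.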
The increasing popularity of this software has also drawn criticism and distrust in different areas, largely because it is almost impossible to distinguish human writing from ChatGPT-generated text.
ChatGPT is estimated to have reached 100 million monthly active users in January 2023, just two months after launch, making it the fastest-growing consumer application in history.
Source: Reuters
The issue is that, although the model has many strengths, it also has notable weaknesses. It can add two-digit numbers with complete accuracy, but when it comes to multiplying two-digit numbers, it produces the right answer only about 30% of the time.
Like other large language models, the chatbot often responds with inaccurate or misleading information, and it struggles with language issues such as consistency, referential expressions, and coherence.
In fact, when ChatGPT gets asked whether it is a reliable source of information, its honest reply is that “it is not recommended to rely on ChatGPT as a sole source of factual information. Instead, it should be used as a tool to generate text or complete language-based tasks and any information provided by the model should be verified with credible sources.”
A Hybrid Approach
Beyond its capacity to give correct or incorrect answers on different topics, when it comes to translation ChatGPT has difficulty dealing with collocations, jargon, and acronyms, to name a few. It also struggles to decipher different languages in one message: even though the software supports 95 languages and 12 programming languages, the platform can get confused by the use of more than one language in a single message.
Initial research has begun to map ChatGPT's translation capabilities and scope. Some experts are confident that, as long as it is complemented by thorough revision from human professionals, artificial intelligence may achieve performance comparable to professional human translation into popular, resource-rich languages, though it may show poor results in lesser-known languages.
Even though ChatGPT's contributions are highly positive in many industries and activities, the limitations of this type of technology become evident again in tasks requiring common sense, emotional intelligence, context, deep socio-cultural knowledge, and even grammatical understanding.
Therefore, accurate, high-quality translations still depend on experienced human translators. Technology is a valuable and fundamental ally, but it cannot remove the human factor from the loop. As with other artificial intelligence tools, a hybrid, synergistic approach between human and machine appears to be the best way forward.