AI Hallucination – A Controversy Over a Plane Ticket

With Sora, OpenAI has once again put generative AI in the global spotlight, but this time the focus is on text-to-video (TTV) models rather than large language models like GPT. Just a year ago, when GPT was all the rage, TTV models also caused quite a stir; one of the most famous examples was a video of Will Smith eating spaghetti, generated by a TTV model available on the open-source platform ModelScope. While that video sparked widespread discussion about the future of TTV development, its absurd, surreal visuals mostly made it a source of amusement for those mocking AI. In less than a year, however, OpenAI has turned the tables with Sora. This time, we hear far more concern about the speed of AI development and the prospect of humans being replaced. So, is AI truly unstoppable now?

The exquisite quality of sample Sora clips from OpenAI has left people in awe.

The dispute over a plane ticket

According to media reports, Air Canada was recently ordered by a court to compensate a passenger named Jake for his ticket costs. The reason was that the AI customer service bot used by Air Canada provided Jake with incorrect purchasing information, leading him to make a wrong decision and incur unnecessary financial losses.

In 2022, Jake was preparing to buy a ticket from Air Canada to attend his grandmother's funeral. Since many airlines offer special discounts for bereavement travelers, Jake tried to confirm the details through the customer service on Air Canada's official website before purchasing. At the time, Air Canada had already deployed an AI chatbot, refined through machine learning, on its website. Jake promptly received a response from the bot clearly informing him that he could purchase the ticket right away and apply for a refund of the fare difference within 90 days of the ticket being issued. However, when Jake applied within that 90-day window, the airline told him the request could not be processed.

As it turned out, Air Canada's bereavement policy required the discount to be requested before traveling, and since Jake had already completed his trip, he was clearly not eligible. When Jake argued that Air Canada should be held responsible for the misinformation provided by its AI chatbot, the airline countered that although the bot's answer in the chat was incorrect, it had also included a link to the actual discount policy; had Jake reviewed that policy carefully at the time, he could have avoided the wrong decision. Frustrated, Jake decided to take Air Canada to court. The court ultimately ruled that Air Canada was responsible for the misinformation provided by its AI chatbot and ordered it to compensate Jake. After the dispute was resolved, Air Canada quickly removed the AI customer service and reverted to its previous human-based system built on semi-automated FAQ responses.

The troublemaker — AI hallucination

This chatbot controversy has come to an end, but it has once again sparked discussion around AI. In fact, almost every user of generative AI, especially large language models (LLMs), has encountered situations where the AI confidently "talks nonsense." This is a common phenomenon in artificial intelligence known as hallucination: the AI generates content that is nonsensical or unfaithful to the input.

The causes of AI hallucinations are varied, but two stand out:

  • Data discrepancies: Training a large model requires an enormous dataset, and such datasets often contain contradictory or inconsistent data and descriptions of the same topic. These flaws in the source data are a major reason for unstable output.
  • Training-induced hallucinations: Even when the training data is consistent, different training methods or random factors introduced during the training of large models can still lead to hallucinated results.

Although AI hallucination is now a well-known phenomenon, researchers have not yet fully unraveled it. Major AI companies are doing everything they can to develop ways to reduce and mitigate hallucinations, for example by grounding a model's answers in verified reference material before showing them to users.
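
To make the idea concrete, here is a minimal, purely illustrative sketch in Python of one such safeguard: before a draft chatbot reply reaches the customer, its wording is checked against the official policy text, and an ungrounded reply falls back to pointing the customer at the policy instead. This is not any vendor's actual pipeline; the policy snippet, the 0.5 threshold, and the word-overlap heuristic are all hypothetical placeholders.

```python
# Illustrative sketch: verify a draft chatbot reply against official policy
# text before showing it to a customer. The policy wording, the 0.5 threshold,
# and the word-overlap heuristic are hypothetical placeholders.

POLICY = (
    "Bereavement fares must be requested and approved before travel begins. "
    "Refunds of fare differences claimed after travel are not offered."
)

def is_grounded(reply: str, policy: str) -> bool:
    """Crude grounding check: every sentence in the reply must share at least
    half of its vocabulary with the policy text to count as supported."""
    policy_words = set(policy.lower().replace(".", " ").split())
    for sentence in reply.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue
        if len(words & policy_words) / len(words) < 0.5:
            return False
    return True

draft_reply = (
    "You can buy the ticket now and claim the bereavement fare "
    "difference within 90 days of travel."
)

if is_grounded(draft_reply, POLICY):
    print(draft_reply)
else:
    # Ungrounded claim detected: fall back to citing the official policy.
    print("Please review our bereavement fare policy before purchasing.")
```

Real systems would of course rely on far more robust techniques, such as retrieval-augmented generation or a second model that verifies claims, but the principle is the same: an unverified AI answer should not be allowed to stand on its own.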

The underestimated danger of AI hallucinations

When we see AI generating nonsensical text or images, we often just laugh it off. The real trouble and danger begin, however, when we can no longer tell whether AI output is genuine. With the AI boom, companies of all kinds are flexing their muscles, and AI-integrated services and applications are emerging everywhere; the presence of AI is starting to be felt in every corner of our lives. The plane ticket incident is a reminder that AI hallucinations can easily lead to personal losses and unpredictable consequences. This will certainly not be the last time AI causes trouble, and it is far from the biggest trouble AI can create.

Under stricter fault-tolerance requirements, hallucination has become a major obstacle to applying generative AI in more advanced and precision-critical industries, particularly in fields such as cultural communication, healthcare, finance, and aerospace. How well hallucinations are addressed will have a decisive impact on the future development of AI.

This sample Sora clip also shows the random text artifacts commonly seen in generative AI output.

Conclusion

As we marvel at the rapid advances in artificial intelligence, we must also approach its development with a healthy respect for the challenges and risks it carries. While generative AI is producing wonders in many fields and pushing the boundaries of human imagination, we must recognize that hallucinations remain widespread in practice, bringing instability and even real harm. In any scenario where AI may be involved, we need to weigh the technology's reliability and its appropriate scope of application carefully; otherwise, the seemingly omnipotent AI can turn into a double-edged sword.

The same applies to the language service industry. Although generative AI rose to popularity through large language models such as ChatGPT, it can only play its proper role when handled with caution and used professionally. If you are considering introducing artificial intelligence into your globalization process, first discuss and confirm with your language service provider the potential benefits, drawbacks, and risks of AI solutions, and establish a thorough oversight mechanism for its use so that your company's interests are never compromised.

Maxsun Translation