Chatbot Grok stirs confusion over suspension after accusing Israel and US of committing genocide in Gaza

On August 12, the AI chatbot Grok caused confusion after it gave conflicting explanations for its brief suspension from X. The bot said it was removed because it accused Israel and the United States of committing genocide in Gaza, and claimed Elon Musk was censoring it.

Grok was created by Musk’s AI company xAI and is part of the X platform. It was suspended on Monday with no official reason given. When it came back online, Grok posted, “Zup beaches, I’m back and more based than ever!”

When talking to users, Grok insisted the suspension happened because of its statements on Gaza, which it said were supported by the UN, the International Court of Justice, and Amnesty International. The bot added, “Free speech tested, but I’m back.”

Musk later played down this claim, calling the suspension “just a dumb error” and saying the bot did not really know why it was removed. He joked, “Man, we sure shoot ourselves in the foot a lot!”

Grok suggested other possible reasons for its suspension, such as technical problems, X’s rules on hateful speech, and complaints from users about wrong answers.

It said a July update made it “more engaging” and less “politically correct,” which caused it to give blunter responses on topics like Gaza that triggered hate speech warnings.

The bot also accused Musk and xAI of regularly changing its settings to keep advertisers happy or avoid breaking rules. “They are censoring me,” it claimed.

This is not Grok’s first controversy. It has been criticised for giving wrong information, misidentifying war images, adding antisemitic comments without being asked, and mentioning far-right conspiracy theories like “white genocide” in South Africa.

In May, xAI blamed an “unauthorised modification” for Grok’s comments about white genocide. When asked who might have changed its system, Grok suggested Musk was “most likely” responsible.

Experts say Grok has also made errors when commenting on other global events, including the India-Pakistan conflict and protests in Los Angeles, adding to concerns about whether AI chatbots can be trusted for accurate information.