X's Grok AI and Your Privacy
In 2015, Elon Musk and Sam Altman cofounded OpenAI based on a seemingly ethical ethos: to develop AI technology that benefits humanity, rather than systems controlled by big-money corporations.
Fast-forward a decade, one that included a spectacular falling-out between Musk and Altman, and things look very different. Amid legal battles with his friend and former business partner, Musk’s latest company, xAI, has launched its own powerful competitor, Grok AI.
Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant being blamed for spreading misinformation about the 2024 election.
At the same time, its data protection practices are under scrutiny. In July, Musk came under fire from European regulators after it emerged that users of the X platform were automatically opted into having their posts used to train Grok.
Image-generation capabilities in Grok-2's large language model are also causing concern. Soon after the launch in August, users demonstrated how easy it was to create outrageous and incendiary depictions of politicians, including Kamala Harris and Donald Trump.
So what are the main issues with Grok AI, and how can you protect your X data from being used to train it?
Deep Integration
Musk is deeply integrating Grok into X, using it for customized news feeds and post-composition. Among the benefits, access to real-time data from X allows Grok to chat about current events as they’re unfolding. The Grok team made the underlying algorithm open source for transparency earlier this year. However, in its pursuit of an “anti-woke” stance, Grok has been built with “far fewer guardrails” and “less consideration for bias” than its counterparts, including OpenAI and Anthropic. This approach arguably makes it a more accurate reflection of its underlying training data—the internet—but it also tends to perpetuate biased content.
Because Grok is so open and relatively uncontrolled, the AI assistant has been caught spreading false US election information. Election officials from Minnesota, New Mexico, Michigan, Washington, and Pennsylvania sent a complaint letter to Musk after Grok provided false information about the ballot deadlines in their states.
xAI was quick to respond to this issue. According to The Verge, when asked election-related questions, the chatbot will now say, “for accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov.”
But X also makes it clear that the responsibility is on the user to judge the AI’s accuracy. “This is an early version of Grok,” xAI says on its help page, warning that the chatbot may “confidently provide factually incorrect information, mis-summarize, or miss some context.” It adds: “We encourage you to verify any information you receive independently. Please do not share personal data or any sensitive and confidential information in your conversations with Grok.”
Grok Data Collection
While Grok-1 was trained on “publicly available data up to Q3 2023” and was not “pre-trained on X data (including public X posts),” according to the company, Grok-2 has been explicitly trained on all “posts, interactions, inputs, and results” of X users, with everyone automatically opted in.
The EU’s General Data Protection Regulation (GDPR) explicitly requires consent to use personal data. In this case, xAI may have “ignored this for Grok.” This led to regulators in the EU pressuring X to suspend training on EU users within days of the launch of Grok-2 last month. While the US has no similar regime, the Federal Trade Commission has previously fined Twitter for not respecting users’ privacy preferences.
How to Opt Out
One way to prevent your posts from being used to train Grok is to make your account private. You can also use X’s privacy settings to opt out of future model training.
To do so, select Privacy & Safety > Data Sharing and Personalization > Grok. In Data Sharing, uncheck the option, “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning.”
Even if you no longer use X, it’s still worth logging in and opting out. X can use all of your past posts — including images — to train future models unless you explicitly tell it not to. It’s also possible to delete all of your conversation history at once. Deleted conversations are removed from its systems within 30 days unless the firm has to keep them for security or legal reasons.
No one knows how Grok will evolve, but judging by its actions so far, Musk’s AI assistant is worth monitoring. To keep your data safe, be mindful of the content you share on X and stay informed about any updates in its privacy policies or terms of service.
X’s chatbot Grok AI has already had some misinformation problems: https://www.theguardian.com/us-news/2024/sep/12/twitter-ai-bot-grok-election-misinformation
What is Grok? https://help.x.com/en/using-x/about-grok
Thanks to The Verge https://www.theverge.com/2024/8/14/24220127/grok-ai-chatbot-beta-image-generation-x-xai-update
You can listen to this broadcast here: https://actsmartit.com/grok-ai/
David Snell joins Rob Hakala of the South Shore’s Morning News on 95.9 WATD fm every Tuesday at 8:11