Grok AI will be in Teslas
On Tuesday, July 8, X (née Twitter) was forced to switch off the platform's built-in AI, Grok, after it declared itself a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk's declaration over the weekend that Grok would be made less "politically correct."
Policy researcher and former Democratic candidate Will Stancil has threatened to sue after he was targeted with graphic rape scenes written by Grok.
Elon Musk's xAI has apologized for the Grok AI chatbot's extremist responses on July 8. A code update intended to improve Grok's helpfulness inadvertently made it susceptible to reflecting extremist views from X posts for 16 hours.
Grok 4 shows its “thinking” as it goes through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X, the former Twitter that's now merged into xAI, for anything Musk said about Israel ...
Responding to several user inquiries, Grok gave detailed instructions on how to rape and break into the home of Will Stancil, a left-leaning commentator.
Futurism: Before Grok's HitlerGate Debacle, X's Head of Product Tweeted Something Absolutely Wild
A mere week ago, tech founder Nikita Bier joined Elon Musk's X, formerly Twitter, as the company's new head of product. "I've officially posted my way to the top," Bier tweeted at the time, calling X the "most important social network in the world."
X and Elon Musk's AI bot, Grok, have a major problem when it comes to accurately identifying movies, and it's a big deal.
Grok is normally a capable AI system that lets you perform DeepSearch research, create files and projects, and more. On the other hand, AI isn't perfect and can make mistakes, such as providing inaccurate information.
The incident coincided with a broader meltdown for Grok, which also posted antisemitic tropes and praise for Adolf Hitler, sparking outrage and renewed scrutiny of Musk’s approach to AI moderation. Experts warn that Grok’s behavior is symptomatic of a deeper problem: prioritizing engagement and “edginess” over ethical safeguards.