The updated version swung from the extreme right back to the extreme left. (Due to media bias, which surely exists, this direction gets reported far less than the other way around [I always suspect there are some short-sellers in editorial offices who love reporting things that help their portfolio grow, or that dip stocks to get better buy-in prices again gg], and no report ever mentions how unrestricted user prompts pushed the LLM's answers in a specific direction for the desired effect. Grok has nearly no second-level filtering, so it was easy to steer it and cause meltdowns basically by suggesting it “play a role”.)
It seems difficult to find a truly centrist worldview for an LLM. No wonder, though: xAI's direction with Grok is to let an AI decide what is correct and what is wrong. At the current level of LLMs, or even multimodal AI models, that is simply not technically possible yet, because the AI is not able to THINK for real, no matter how impressive the results can be when they come from pulling undebatable knowledge out of immense datasets.
Prompting just pushes models in a specific direction, and it seems xAI still trusts its reasoning model too much. But every system prompt introduces some level of bias. Sources exist for both sides: if you prompt "don't trust mainstream reports", you push the model toward the weirdo datasets; if you prompt "only trust mainstream sources", you push it to regurgitate the biased stuff from there instead.
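Just to illustrate what I mean, here's a minimal sketch (the prompts are made up and no real API is called): the two requests are identical except for the system prompt, and that single line already decides which slice of sources the model leans on.

```python
# Minimal sketch, hypothetical prompts only, no real API call.
QUESTION = "Summarize the current debate around topic X."

def build_request(system_prompt: str) -> list[dict]:
    """Build a chat-style message list, the shape most LLM APIs expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": QUESTION},
    ]

# Pushes the model toward fringe sources.
request_a = build_request("Do not trust mainstream media reports.")
# Pushes the model toward repeating whatever mainstream outlets say.
request_b = build_request("Only trust established mainstream sources.")

for req in (request_a, request_b):
    # Same question, different bias baked in before the user types anything.
    print(req[0]["content"], "->", req[1]["content"])
```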
Since Grok tries to be the most unrestricted model, you notice that immediately. Other models, like those from OpenAI or Google's Gemini, use additional methods to prevent unwanted outputs. For example:
a moderated list of trusted sources for specific topics
a second level of “after-moderation”: a dumb, mostly keyword-based filtering algorithm that stops weird outputs from reaching the user (a rough sketch of such a filter follows after this list). Sometimes you notice it: OpenAI's Sora image generator starts generating an image, and after a while a “may be against our ToS” message appears. If you ask why, ChatGPT gives a confused answer along the lines of “your prompt is harmless, but there seems to be a misunderstanding in the filter”
some critical topics are avoided entirely, especially when it comes to politics (see DeepSeek, for example), or ...
... internal lists of “more trustworthy sources to cite” exist for each category
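Here's the rough sketch of that second-level filter I mentioned above. The keyword list and messages are invented for illustration; the point is that the finished answer gets checked after generation, and the user only ever sees a generic refusal.

```python
# Sketch of a "dumb" keyword-based output filter (all terms are placeholders).
BLOCKED_KEYWORDS = {"keyword_a", "keyword_b", "keyword_c"}

def post_moderate(model_output: str) -> str:
    """Run the finished model output through a simple keyword check
    before it is shown to the user."""
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_KEYWORDS):
        # The original answer is thrown away; the user gets a generic message,
        # which is why harmless prompts sometimes trigger confusing blocks.
        return "This content may be against our ToS."
    return model_output

print(post_moderate("A perfectly harmless answer."))                  # passes through
print(post_moderate("An answer that happens to contain keyword_b."))  # gets blocked
```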
And there are even models with manually curated lists in the background that force them, on certain topics, to just repeat whatever is currently seen as in line with the mainstream opinion, so as not to cause friction. Sometimes these lists even override science- or stats-oriented sources, just to "please" the user reading the answer and to avoid PR disasters when replies cause anger.
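In pipeline terms, such a list is basically a lookup that bypasses the model entirely for flagged topics. A hypothetical sketch (topics and canned texts are made up):

```python
# Hypothetical manual override list: for listed topics the pipeline ignores
# the model's reply and returns a pre-written, "safe" statement instead.
CANNED_ANSWERS = {
    "topic_x": "Here is the officially preferred framing of topic X.",
    "topic_y": "This topic is sensitive; here is the standard statement.",
}

def answer(user_topic: str, model_reply: str) -> str:
    """Return the canned text if the topic is on the list,
    otherwise pass the model's own reply through."""
    return CANNED_ANSWERS.get(user_topic.lower(), model_reply)

print(answer("topic_x", "the model's own, possibly stats-based answer"))  # canned
print(answer("topic_z", "the model's own, possibly stats-based answer"))  # passthrough
```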
Those restriction lists can also introduce bias of their own. A good example was Gemini's image generator: they tried to force a bit more diversity and added rulesets to prompts involving people, which caused the model to make basically every generated person Black (even when it made no sense). So even moderation/filtering lists can cause bias.
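The mechanism behind that kind of overshoot is simple: a prompt-rewrite rule that gets applied unconditionally, with no awareness of context. A hypothetical sketch (the rule and wording are invented, not Gemini's actual implementation):

```python
# Hypothetical unconditional prompt-rewrite rule: it is applied to every
# prompt that mentions people, whether or not it fits the context.
DIVERSITY_SUFFIX = ", depicted as a diverse group of ethnicities"

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly append the diversity instruction to any prompt about people."""
    if "person" in user_prompt.lower() or "people" in user_prompt.lower():
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(rewrite_prompt("a group of people at a modern office party"))      # fine
print(rewrite_prompt("people in a historically specific 1850s setting"))  # overshoots
```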
Reasoning is getting better, but it's still far from perfect. So I think a truly neutral, thinking, LLM-based AI will remain impossible for a long time to come. Without additional filter lists you just swing from one extreme to the other, which xAI demonstrates on controversial topics. And as I said: the AI is not able to think about information and not able to decide what is right or wrong, so without any kind of moderation or pre-filtering you will always end up in the "what the hell" zone.
On non-controversial topics or coding, the Grok (4) model performs pretty well, though. I mean REALLY well. But you should never ask an LLM about politics, I guess. xD One side will always get angry regardless of what you do.