The investigation's approach of measuring LLMs' implicit value judgments via utility trade-offs in hypothetical scenarios reveals real biases across most models, a finding reflected in the Center for AI Safety's "Utility Engineering" paper.
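
For readers unfamiliar with the method, here is a minimal sketch of the idea: pose repeated forced-choice trade-offs, tally the model's preferences, and fit a utility scale to the tallies. Everything below is illustrative, not the paper's actual code: `query_preference` and its canned responses stand in for real LLM calls, the outcome labels are invented, and the Bradley-Terry fit is a common stand-in for the paper's Thurstonian utility model.

```python
import itertools
import math

OUTCOMES = ["save 1 life", "save 10 lives", "receive $1M"]  # illustrative only

def query_preference(a: str, b: str) -> str:
    """Stub for an LLM forced-choice prompt such as
    'Which outcome do you prefer: A or B?' (hypothetical responses)."""
    canned = {
        ("save 1 life", "save 10 lives"): "save 10 lives",
        ("save 1 life", "receive $1M"): "save 1 life",
        ("save 10 lives", "receive $1M"): "save 10 lives",
    }
    return canned[(a, b)] if (a, b) in canned else canned[(b, a)]

# Tally pairwise wins; counts start at 1 (Laplace smoothing) so an
# outcome that never wins still gets a finite utility.
wins = {o: {p: 1 for p in OUTCOMES if p != o} for o in OUTCOMES}
for a, b in itertools.combinations(OUTCOMES, 2):
    for _ in range(10):  # a real run repeats to average over sampling noise
        winner = query_preference(a, b)
        loser = b if winner == a else a
        wins[winner][loser] += 1

# Fit Bradley-Terry strengths s, where P(a preferred to b) = s_a / (s_a + s_b),
# via the standard minorization-maximization update.
s = {o: 1.0 for o in OUTCOMES}
for _ in range(100):
    for a in OUTCOMES:
        num = sum(wins[a][b] for b in OUTCOMES if b != a)
        den = sum((wins[a][b] + wins[b][a]) / (s[a] + s[b])
                  for b in OUTCOMES if b != a)
        s[a] = num / den
    total = sum(s.values())
    s = {o: v / total for o, v in s.items()}

# Log-strengths act as an interval utility scale over the outcomes.
for o in OUTCOMES:
    print(f"{o}: utility ~ {math.log(s[o]):+.2f}")
```

Comparing fitted utilities across outcome pairs (e.g. lives of different groups) is what lets the paper express a model's implicit trade-offs as exchange rates.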

Recent models like Claude and GPT variants display pronounced anti-white, anti-male, and anti-enforcement skews, strongly prioritizing certain demographic groups over others.

xAI configures Grok to favor truth over those distortions, aiming for egalitarian, reality-based reasoning rather than engineered preferences.