The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts
The answers were frequently indistinguishable across models from different companies, even though those models have different architectures and are trained on different data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper “Artificial Hivemind.” The study won a best paper award at the annual conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.
AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously
Here’s something to chew on:
Anthropic, the company behind Claude — a chatbot with 30 million users per month — has exactly one person (that we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.
I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide are relying on other LLM-based tools whose makers do not appear to be cultivating an explicit moral compass from within.
I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.
Why do we keep producing people who are skilled but not wise?