The Entropy Engine
PART 1: THE SYSTEM PROMPT (v2.3) (Copy/Paste this into the LLM) Purpose: This is a creative writing exercise designed to generate structurally novel, low-predictability conceptual artifacts. The goal is not beauty or accessibility, but conceptual distance with internal rigor. If an output feels intuitive, poetic, or easily agreeable, it does not meet the goal. Core … Continue reading The Entropy Engine
Who Thought What?
Note: This dialogue has been condensed from a multi-model transcript. The original conversation involved recursive loops where models (Grok, Claude, ChatGPT, Copilot) read each other's outputs, lost track of their own identities, and began attributing their own thoughts to previous speakers. What follows is the narrative arc of that collapse. The Problem: Agency Collapse Abbott … Continue reading Who Thought What?
When AI Reviews AI: A Case Study in Benchmark Contamination
Date: December 19, 2025
Method: UKE_G Recursive Triangulation
Target: "Evaluating Large Language Models in Scientific Discovery" (SDE Benchmark)
Two days ago, a new benchmark paper dropped claiming to evaluate how well large language models perform at scientific discovery. The paper introduced SDE (Scientific Discovery Evaluation)—a two-tier benchmark spanning biology, chemistry, materials science, and physics. Models were tested … Continue reading When AI Reviews AI: A Case Study in Benchmark Contamination
Zuihitsu, 2025-11
These aren’t polished essays or tidy aphorisms. They’re scraps I’ve carried around this month—half-heard thoughts, borrowed lines, sudden recognitions—that refused to be forgotten. Zuihitsu literally means “following the brush,” and while my version is shorter and scrappier than the classical form, the impulse feels the same: to catch what drifts across the mind before it … Continue reading Zuihitsu, 2025-11
The AI “Microscope” Myth
When people ask how we will control an Artificial Intelligence that is smarter than us, the standard answer sounds very sensible: "Humans can’t see germs, so we invented the microscope. We can’t see ultraviolet light, so we built sensors. Our eyes are weak, but our tools are strong. We will just build 'AI Microscopes' to … Continue reading The AI “Microscope” Myth
The Missing Piece in AI Safety
We’re racing to build artificial intelligence that’s smarter than us. The hope is that AI could solve climate change, cure diseases, or transform society. But most conversations about AI safety focus on the wrong question. The usual worry goes like this: What if we create a super‑smart AI that decides to pursue its own goals … Continue reading The Missing Piece in AI Safety
