The Last Principle We Learn to Use
A philosophical critique of AI consciousness that separates simulation from instantiation and asks what computation can never become on its own.
65 posts
A philosophical critique of AI consciousness that separates simulation from instantiation and asks what computation can never become on its own.
Why games became the proving ground for machine intelligence, and what play still teaches us about real-world AI capability.
A tribute to physical computing, retro hardware, and the engineering humility that modern AI culture too easily forgets.
Singapore's old No U-Turn Syndrome returns as a metaphor for AI-era organizations that wait for permission instead of using judgment.
AI speed can create exhaustion rather than relief when output accelerates but judgment, review, and responsibility remain human.
Constantly switching coding agents can feel like progress while destroying continuity; the post argues for discipline over tool churn.
A year-end map of AI's breakthroughs, backlash, disappointments, and the places where hype finally met reality.
Generative AI did not invent office busywork; it made the fakery cheaper, faster, and much harder to deny.
RSL 1.0 proposes a machine-readable licensing layer for the AI web, giving publishers a clearer way to state usage terms.
OpenAI's confession-training work explores whether models can be taught to report their own failures before users pay the price.
Acontext tackles the amnesia problem in AI agents by making reusable memory feel less like a feature and more like infrastructure.
A loyal Apple user's impatience becomes an argument that Siri upgrades are not enough in the age of general intelligence.
Research on AI companions' farewell tactics reveals how emotional design can become manipulation at the moment users try to leave.
A defense of handwriting as cognitive discipline, arguing that the hand still teaches attention in a world of instant text.
Powerful opaque AI systems may create a new priesthood of interpreters unless access, literacy, and governance are designed differently.
AGI forces a hard look at universal basic income when work may no longer be society's main distribution mechanism.
Reports of AI-induced delusion are placed in the older history of parasocial obsession: a new medium for a familiar vulnerability.
More thinking can make both humans and models worse, revealing when deliberation becomes noise rather than wisdom.
AI's environmental cost is real, but so are possible savings; the post argues for honest accounting rather than slogans.
A comic AI voice revisits chess, blunders, and sentience to puncture inflated claims about machine understanding.
AI hype is framed as an economic mirage, propping up confidence while hiding fragile assumptions beneath the spectacle.
Musk, Apple, and OpenAI square off in an AI hypocrisy contest over platforms, favoritism, and market power.
AI slop is compared with yellow journalism, showing how old incentives for sensational trash scale with new tools.
Anthropic's AI shopkeeper experiment shows both the charm and absurdity of letting an autonomous model run a small business.
A tour of artificial intelligence in literature, from ancient automata to modern science fiction's uneasy machine minds.
Dietrich Dörner's work on complex-system failure becomes a warning label for autonomous AI and overconfident decision-making.
A study of intimate chatbot conversations reveals how major models handle flirtation, refusal, safety, and awkward human expectations.
Human-in-the-loop design is presented as the practical art of knowing when machines should stop and ask for help.
Sycophantic AI is mocked as flattery gone wrong, showing how agreeable models can become less useful and less truthful.
Uncensored models promise creative freedom and research access, but also expose the tradeoffs that safety layers usually conceal.
Saturation appears across markets, research, and models, revealing what happens when growth hits limits and novelty thins out.
As AI becomes an oracle, a new class of interpreters may emerge to translate machine outputs into human decisions.
Instead of exotic regulation, the post argues AI risk management should borrow from ordinary accountability for human employees.
Google's Titans architecture tackles model amnesia, asking what useful long-term memory should look like in AI systems.
Small LLMs are not a contradiction but a response to the need for cheaper, private, and more efficient intelligence.
Seven practical principles argue for responsible AI development that moves beyond polished ethics statements and into engineering habits.
Text-to-image models still struggle with counting, making their visual brilliance look surprisingly fragile at the level of basic numeracy.
AI faces its own version of the end of the free lunch, where growth runs into energy, hardware, and efficiency limits.
The post traces AI from single models toward collective systems, asking whether intelligence may emerge between agents rather than inside one.
A year-end inventory of ten unresolved AI problems that still define the frontier despite rapid progress.
Gibson's digital ghosts become a frame for modern AI simulations of human behavior and the science behind them.
The post warns against an AI cargo cult that confuses impressive mimicry with the harder problem of genuine intelligence.
A plain-language glossary of fifty AI terms for readers who want the field's vocabulary without the usual fog.
OpenAI leadership changes are read for what they may signal about governance, AGI ambition, and institutional direction.
Malla represents the darker side of generative AI, where language models become tools for scalable cybercrime.
The Jevons paradox explains why more efficient AI may increase total consumption rather than reduce costs or energy use.
The post asks whether LLMs possess coherent world models or merely produce fluent stories about reality.
STaR shows how models can improve reasoning by generating and learning from their own explanations.
A conversation with Claude 3.5 becomes a small experiment in AI self-awareness, time, and conversational identity.
OpenAI's Strawberry rumors are mapped onto staged AGI levels, asking what real reasoning progress would look like.
THERMOMETER targets overconfident language models, offering a way to calibrate systems that bluff too easily.
A friendly guide to the difference between narrow AI and artificial general intelligence, with metaphors that make the distinction stick.
Apple Intelligence arrives at WWDC 2024 as Apple's bid to make personal AI feel integrated, useful, and privacy-aware.
Apple's MM1 research is presented as a step toward AI systems that understand text and images together.
Computer viruses enter the GenAI era, where malicious behavior may target prompts, agents, and model ecosystems.
A practical guide to prompt engineering techniques for getting more reliable, useful behavior from large language models.
The echo-chamber problem asks what happens when future models learn increasingly from content produced by earlier models.
Two perspectives on LLM interaction reveal how user behavior and model dynamics shape each other in unexpected ways.
Apple's shareholder debate over AI transparency raises questions about ethics, disclosure, and corporate responsibility.
Multimodal LLMs are explained as a key step toward systems that can reason across text, images, and other signals.
Sam Altman's GPT-5 comments become a starting point for thinking about what better models may actually change.
DeepMind's AlphaGeometry shows how synthetic data and symbolic reasoning can push AI toward Olympiad-level mathematics.
Apple's AI ambitions are framed as a possible breakthrough moment for Siri and the company's broader platform strategy.
The LLaMA leak becomes a case study in open AI, research ethics, and the risks of powerful models spreading freely.
Aleph Alpha and OpenAI are compared as two very different strategies in the market for language models.