AI Consulting for SMEs: Practical Guidance, Real Impact
Artificial intelligence is transforming how modern businesses operate, but small and medium-sized enterprises often face the same challenge: turning potential into real, measurable value. My consulting approach focuses on practical strategy, clear ROI, and technically sound implementation—from workflow automation and prompt engineering to infrastructure, risk assessment, and end-to-end project guidance. No hype, no jargon—just AI…

Amazon’s Clash with ChatGPT: A Tug-of-War Over Shopping Data
Picture this: you’re chatting with an AI, hunting for the perfect deal on a new gadget. You ask about prices, specs, and reviews, and it pulls everything together in seconds. Sounds handy, right? But recently, Amazon threw a wrench into that setup by tweaking its robots.txt file to shut out ChatGPT’s new Shopping Research agent.…
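Mechanically, a block like this takes only a few lines. The sketch below is hypothetical: `GPTBot` and `ChatGPT-User` are real OpenAI crawler tokens, but whether Amazon's actual robots.txt targets exactly these agents, or a new token for the Shopping Research agent, is an assumption, not something stated in the post.

```
# Hypothetical robots.txt rules (illustrative only; not Amazon's actual file).
# Each User-agent group applies its rules to crawlers announcing that token.
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```

Note that robots.txt is advisory: it asks well-behaved crawlers to stay out rather than technically enforcing the block.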

Unusual Language Artifacts from Noisy LLM Training Data
Large Language Models sometimes produce surprisingly odd or amusing outputs that can be traced back to quirks in their training data. These artifacts often manifest as gibberish, misplaced words, or bizarre responses that defy the prompt’s logic. Researchers and users have observed cases where an LLM hallucinates strange phrases, avoids repeating certain words, or outputs…

Beyond Fine-Tuning: What Apple’s Multimodal Sensor Fusion Study Reveals About LLMs and User Privacy
In late 2025, Apple published an intriguing research piece on multimodal sensor fusion for activity recognition. At first glance, the study appears to be another incremental step in understanding how audio and motion signals can be combined to classify human activities. But hidden inside the technical details lies something far more consequential—two developments that could…

Beyond the Token Stream: Investigating Introspective Awareness in Large Language Models
In the paper “Emergent Introspective Awareness in Large Language Models”, Jack Lindsey and collaborators explore a question that until recently belonged more to philosophical speculation than empirical investigation: can a large language model (LLM) reflect on its own internal states? The work operates at the intersection of deep-network interpretability and metacognitive-like behaviour…