How To Tell if Writing Was Made by AI - Odd Lots
Apr 2, 2026 Odd Lots
Considering that many people don't know how or where to place a comma, it's safe to say AI is already better than most people at writing. It produces clean copy. It can be surprisingly persuasive. And sometimes it's even informative. But there's frequently still something about it that just seems... off. Many people can tell quite quickly when they're reading AI-generated text. And beyond the style, the existence of AI-generated text has all kinds of ramifications: it makes it easier for students to cheat, enables deceptive chatbots, and potentially degrades the experience on sites like Reddit.

So how do you actually tell whether a piece of writing was generated by AI? On this episode, we speak with Max Spero, the CEO of Pangram Labs, a company that builds software to detect whether a piece of content was AI-generated. We talk about the advanced techniques they use, the risk of false positives and false negatives, and what AI writing means in general for the future of the Internet.
Chapters:
00:00:00 - Opening Statistics on AI Content
00:04:32 - The Rise of Human-Only Disclaimers
00:06:56 - Interview with Max Spero Begins
00:07:38 - Why AI Detection Matters
00:11:38 - Technical Methodology Behind AI Detection
00:16:25 - Model Development and Capacity
00:18:30 - Real-World Detection Cases
00:22:56 - Internet-Wide AI Content Statistics
00:24:03 - Economic Incentives for AI Content on Reddit
00:26:39 - Advanced Training Methodology
00:30:02 - Company Mission and Vision
00:31:52 - Adversarial Testing and Model Robustness
00:34:13 - Future Applications: Images and Video
00:35:55 - Future of the Internet
00:40:02 - Technical Deep Dive: Perplexity vs. Deep Learning
00:42:17 - Social Norms and Platform Incentives
At 48:05, co-host Joe Weisenthal makes a great point about AI: that it devalues the creativity and effort behind written communication. (Now I see why we learned "language arts" in grade school.)