This podcast was generated with Podidex, your personal podcast creator.

Overview

Tony Hoare spent his life proving software could be correct. This week, Amazon proved it isn't, blaming AI-generated code for a six-hour outage that took down their entire storefront. We're covering the death of a Turing Award winner, a billion-dollar bet against LLMs, and why compiling Linux on RISC-V still takes five times longer than on x86. The tension between legacy and hype has never been sharper.

Tony Hoare died on March 5th

Tony Hoare died on March 5th. He was 92. The man invented quicksort back in the 1960s, and that algorithm still runs underneath practically every programming language today. He also built Hoare logic for proving programs correct, won the Turing Award, taught at Oxford, and helped shape ALGOL.

Here's my favorite story. He's working at Elliott Brothers Ltd and tells his boss he knows a faster way to sort than what they just built. The boss bets him sixpence he's wrong. Hoare codes up quicksort and collects his coin. But get this: he'd already implemented the slower version exactly as asked. That's pure professionalism right there.

His background was unconventional. He studied Classics and Philosophy, then got intensive Russian training through the Joint Services School for Linguists. During National Service, he demoed early computers at fairs, even in the Soviet Union. He knew those machines inside out. Later, at Microsoft Cambridge, he'd slip out most afternoons to catch movies at the Arts Picturehouse. Colleagues were shocked when he casually mentioned seeing the latest flick.

Jim Miles met him regularly in Cambridge over five years. Once, he brought a printout of a blog post just to break the ice. Hoare's memory stayed pinpoint sharp even in his late eighties. They'd chat career stuff half the time, films and future tech the other half.
There's this quote floating around online about Hollywood geniuses solving problems instantly versus real researchers grinding for years. Hoare liked the sentiment, though he wasn't sure he actually said it. Now, here's the intriguing bit. When discussing Moore's Law limits and quantum computing, Hoare mentioned that governments probably have technology years ahead of what we imagine. He got cagey when pressed about cracking crypto primes. Was he trolling? Or did he know something? His humor was top-notch either way. Miles misses that sharpness, that patience, that warmth.

The community responded massively when the news broke: 1693 points and over 60 comments. Developers shared memories of courses with him, Dijkstra, and Feijen. They praised CSP, that's Communicating Sequential Processes, which inspired Occam and Go channels. And yeah, they mentioned the null reference, his 'billion dollar mistake' that causes crashes everywhere. He pushed formal methods hard, dreaming of verifying all software. That didn't go mainstream back then. But with AI generating code now? Maybe his vision finally finds its moment. Quicksort endures. Hoare logic shapes modern verification tools. Without his work, concurrent systems would be chaos. Rest in peace, Tony. You shaped our code.

Zig's compiler just got a monster upgrade

Zig's compiler just got a monster upgrade. Matthew Lugg merged a 30,000-line pull request on March 10th after grinding for two, arguably three months. The big win? A total rework of type resolution logic into something way cleaner and more logical. He cleaned up heaps of messy compiler code too.

Here's the key part for coders. Zig now skips analyzing fields in types you never actually initialize. Think types that double as namespaces, like standard I-O Writer. No more dragging in unwanted code. Picture this: you've got a struct Foo with an evil field that triggers a compile error, but it holds a constant value too. Use only that constant at compile time?
It compiles fine now. That's a huge quality-of-life fix. Dependency loops used to spew useless errors. Not anymore. They'll pinpoint the exact cycle, like when Foo's inner field needs Bar while Bar queries Foo's alignment. Notes flag exactly where to break it. The team fixed tons of incremental compilation bugs too. Over-analysis is gone, so updates fly faster now. That's massive for daily development workflows.

February 13th brought io_uring and Grand Central Dispatch to standard I-O Evented. Both use userspace stack switching, basically green threads. It's still experimental and needs error handling tweaks plus more tests. But you can swap I/O backends effortlessly in one app now. The whole story racked up 150 points on Hacker News. Makes you wonder: will this push Zig closer to stable production use? Builds feel snappier already.

Ever wonder why porting Linux to new architectures feels...

Ever wonder why porting Linux to new architectures feels like such a slog? Marcin Juszkiewicz dove into Fedora's RISC-V efforts three months back. He triaged their tracker down to just 17 open issues and fired off 86 pull requests for packages ranging from llvm15 down to simple games like iyfct. Most merged for Fedora 43. That's genuine progress.

But here's the killer stat. Building binutils version 2.45 drags on forever on RISC-V hardware: 143 minutes on an 8-core machine with 16 gigabytes of RAM. Compare that to x86_64's zippy 29 minutes on similar 8 cores, or aarch64's 36 minutes. They disabled link-time optimization to shave memory use and time, since these RISC-V boards top out at roughly the performance of weak Arm Cortex-A55 chips. Meanwhile, s390x hit 37 minutes on just 3 cores. Wild, right?

QEMU saves the day sometimes. Marcin builds llvm15 in 4 hours using 80 emulated cores, beating the Banana Pi's real 10.5 hours. Future boards like the Milk-V Titan might help, packing up to 64 gigs of RAM.
Still, Fedora needs rackmount servers cranking out binutils in under an hour with link-time optimization enabled for RISC-V to join the primary architectures. Without that? No dice. The post hit 208 points on Hacker News, with folks debating whether silicon or software holds it back.

Amazon just mandated senior engineer sign-offs for any...

Amazon just mandated senior engineer sign-offs for any AI-assisted code written by juniors and mids. This comes after a brutal string of outages hammered their e-commerce site. It went down nearly six hours this month alone. Customers couldn't check prices, access account details, or buy anything. The culprit? An erroneous software deployment that slipped through.

The critical detail? Senior VP Dave Treadwell emailed staff about poor site availability and is pushing a close examination at Tuesday's This Week in Stores Tech meeting. Normally optional, now he's asking everyone to show up. Briefing notes flag a worrying trend: incidents with high blast radius tied directly to generative AI changes. Best practices for this stuff? Not fully there yet.

Surprisingly, AWS hit two AI-related snags too. In mid-December, their Kiro AI tool caused a 13-hour outage on a cost calculator in China; it literally deleted and recreated the environment. Engineers report seeing more Severity 2 incidents daily since January's 16,000 layoffs. Amazon denies any connection, calling it normal operations review. But the thread generated 537 points debating the obvious pattern.

What does this mean for the industry? Seniors now bottleneck juniors' AI productivity boosts, potentially killing those efficiency gains entirely. One theory suggests AI hype simply outpaced safeguards, forcing companies to slam on human brakes. Teams might ship slower, but more reliably. That's the trade-off staring us all down right now.

Yann LeCun just raised over a billion dollars for his...
Yann LeCun just raised over a billion dollars for his new startup, Advanced Machine Intelligence, or AMI. It's based in Paris, and the round values the company at 3.5 billion bucks. Big names like Cathay Innovation, Greycroft, and Bezos Expeditions co-led the round. Backers include Mark Cuban, ex-Google CEO Eric Schmidt, and French billionaire Xavier Niel. LeCun, Meta's former chief AI scientist and 2018 Turing Award winner, cofounded it with Alexandre LeBrun as CEO and Saining Xie as chief science officer.

Here's the bold pitch. LeCun slams large language models. He says pushing them to human-level intelligence is total nonsense. Human thinking roots itself in the physical world, not just words, so AMI builds world models instead. These could power real applications, like simulating aircraft engines to cut emissions or boosting reliability in manufacturing and robotics.

The team includes Meta veterans Michael Rabbat, Laurent Solly, and Pascale Fung. Offices span Paris, Montreal, Singapore, and New York, where LeCun remains a professor. It's his first business venture since quitting Meta in November 2025, after they shifted focus to LLMs over his world model research at Meta's FAIR lab.

Why bet big here? Enterprise applications demand grounded AI that plans safely with persistent memory. While OpenAI and Anthropic chase LLM scale, LeCun eyes open-source tools for industries like biomedicine. The announcement sparked 426 points of discussion, with folks debating whether this cracks artificial general intelligence limits that LLMs simply can't touch. Ambitious? You bet.

First up, Debian's wrestling with AI-generated code

First up, Debian's wrestling with AI-generated code. In February, Lucas Nussbaum floated a general resolution to allow AI-assisted contributions (think partial or full LLM output), but only with strict rules.
Contributors must tag big chunks as AI-generated, fully understand the code, vouch for its security and licenses, and avoid feeding sensitive data like private mailing lists into tools. Debates exploded over terminology (AI is too vague, stick to LLMs), plus ethics, newbie onboarding, and concerns about slop quality. No vote happened by March; they're handling it case-by-case instead. That's probably smart: rushing policy on fuzzy tech could've backfired badly. The post grabbed 316 Hacker News points.

Shifting gears, George Hotz, better known as geohot, called out agent hype on March 11th. Forget those tales of 37 or 69 agents spawning billion-dollar empires overnight while you eat breakfast. AI isn't magic; it's search and optimization, like we've seen forever in computer science. His advice? Ditch rent-seeking gigs that pile on complexity, since big players will crush you there anyway. Create real value for others, ignore the returns, and tune out social media's toxic fear messaging about updating your workflow or becoming worthless. Solid advice amid all the layoff anxiety. 104 points on the board.

Last, Bassim Meledath mapped eight distinct levels of agentic engineering. It starts basic with tab-complete Copilot, then moves to agent IDEs like Cursor. It jumps to context engineering with dense prompts and smart rules files, then compounds with planning, delegating, assessing, and codifying lessons in CLAUDE.md. Level five adds Model Context Protocol servers, or MCPs, and skills for tools like Playwright or pull request reviews; six builds harnesses with feedback loops for self-fixing agents. Seven covers background agents running asynchronously, and eight orchestrates teams via dispatchers. Team levels must sync or your throughput tanks; it's a multiplayer game. Why care? Because model gains multiply exponentially across these leaps. 147 points.

Wrap-up

Tony Hoare proved software could be correct; Amazon just proved AI-generated code often isn't.
The tension between moving fast and building right is back with a vengeance. Whether it's RISC-V hardware limitations or LeCun's billion-dollar bet against LLMs, one thing's clear: the easy answers are drying up. The next era belongs to those who verify their work. Keep that in mind when the agents come knocking.

That's it for this episode. This was generated entirely by Podidex. With Podidex, you can turn any website into a podcast. Just paste a URL, pick a voice and style, and get a podcast episode in under a minute. You can also set up automated podcasts that generate new episodes on a schedule from your favorite sites. Visit podidex.com to create your first personal podcast for free.