<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>disclosure on Himanshu Anand :: Threat Notes</title>
    <link>https://blog.himanshuanand.com/tags/disclosure/</link>
    <description>Recent content in disclosure on Himanshu Anand :: Threat Notes</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 09 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.himanshuanand.com/tags/disclosure/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>the 90 day disclosure policy is dead</title>
      <link>https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/</link>
      <pubDate>Sat, 09 May 2026 00:00:00 +0000</pubDate>
      
      <guid>https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/</guid>
      <description>TLDR The 90 day responsible disclosure window was built for a world where bug finders were rare and exploit development was slow. That world is gone. LLMs have compressed both timelines to near-zero. I have seen it first hand, and so has everyone else paying attention. This post lays out why the old model is broken, with real stories, and makes one ask to the industry: treat every critical security issue as P0 and patch it immediately.</description>
      <content>&lt;h2 id=&#34;tldr&#34;&gt;TLDR&lt;/h2&gt;
&lt;p&gt;The 90 day responsible disclosure window was built for a world where bug finders were rare and exploit development was slow. That world is gone. LLMs have compressed both timelines to near-zero. I have seen it first hand, and so has everyone else paying attention. This post lays out why the old model is broken, with real stories, and makes one ask to the industry: treat every critical security issue as P0 and patch it immediately. Not tomorrow. Not next sprint. Now.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;I have been doing security work for a while now, and the last 12 months feel different. Not in a &amp;ldquo;AI is going to take over the world&amp;rdquo; way. In a much more boring, much more practical way. The tools we use, the tools attackers use, and the tools researchers use to find bugs have all gotten smarter at roughly the same speed. And that has quietly killed some of the fundamental assumptions the security industry has been running on for over a decade.&lt;/p&gt;
&lt;p&gt;Let me walk you through what I mean, with stories.&lt;/p&gt;
&lt;h2 id=&#34;the-old-world-rest-in-peace&#34;&gt;the old world (rest in peace)&lt;/h2&gt;
&lt;p&gt;Pretend it is 2019. You find a critical bug. You write up a report. You send it to the vendor. The vendor takes a few days to triage, a couple of weeks to fix, maybe a month to roll out. If you follow &lt;a href=&#34;https://googleprojectzero.blogspot.com/&#34;&gt;Google Project Zero&lt;/a&gt; style disclosure, you give them 90 days before going public. During those 90 days, you assume:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You are probably the only person who found this bug&lt;/li&gt;
&lt;li&gt;Even if someone else finds it, they will take their own time&lt;/li&gt;
&lt;li&gt;The vendor has a comfortable head start on writing the patch&lt;/li&gt;
&lt;li&gt;After the patch lands, attackers need days or weeks to reverse engineer it into a working exploit&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every single one of these assumptions is now wrong.&lt;/p&gt;
&lt;h2 id=&#34;story-1-10-people-1-bug-6-weeks&#34;&gt;story 1: 10 people, 1 bug, 6 weeks&lt;/h2&gt;
&lt;p&gt;In late April, I reported a pretty bad bug to a company. I am keeping the details vague because the issue is still not patched, but the shape of it goes like this: an attacker can buy anything from the website, send back their own crafted response to the server, and because there is no signature verification on the response, the server happily accepts it. Buy a $5000 item for $0. Mark your purchase as completed without paying. Critical, easy to exploit, very bad day for the company.&lt;/p&gt;
&lt;p&gt;Cool. I write it up, I send it in, I feel good about myself for about 10 minutes.&lt;/p&gt;
&lt;p&gt;Then the triage team comes back and says &amp;ldquo;yeah we know, first reported in March. You are reporter number eleven.&amp;rdquo; &lt;strong&gt;Eleven freaking people&lt;/strong&gt; found the same critical bug in roughly six weeks.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://blog.himanshuanand.com/images/11_submittions.png&#34; alt=&#34;11 duplicate submissions&#34;&gt;&lt;/p&gt;
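The missing control here is ancient and boring: have the payment provider sign the response, and verify that signature server-side before trusting a single field. A minimal sketch using Python's standard `hmac` module; the payload fields and the shared secret are hypothetical, not from the affected vendor:

```python
import hmac
import hashlib

SECRET = b"shared-secret-provisioned-out-of-band"  # hypothetical key material

def sign_callback(payload: bytes) -> str:
    # Signature the payment provider would attach to its response.
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_callback(payload: bytes, signature: str) -> bool:
    # Constant-time comparison; reject anything the provider did not sign.
    expected = sign_callback(payload)
    return hmac.compare_digest(expected, signature)

payload = b'{"order_id": "1337", "status": "paid", "amount": 5000}'
good = sign_callback(payload)
assert verify_callback(payload, good)
# A tampered payload ("amount": 0) no longer verifies against the old signature:
assert not verify_callback(b'{"order_id": "1337", "status": "paid", "amount": 0}', good)
```

Without that check, the server is trusting whatever the buyer's browser hands back, which is exactly the bug shape above.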
&lt;p&gt;A friend from BlueWater CTF had flagged this pattern months ago: LLM-assisted hunters were converging on the same bugs almost simultaneously, across totally unrelated reporters using totally unrelated workflows.&lt;/p&gt;
&lt;p&gt;And it is not just me noticing this. &lt;a href=&#34;https://x.com/d0rsky/status/2040848736713126365&#34;&gt;@d0rsky&lt;/a&gt;, who works on the triage side, posted this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Once a new vulnerability is discovered - especially via some LLM prompt/skills/automation, we start getting a wave of duplicate reports within days. Same root cause, slightly different wording. [&amp;hellip;] What concerns me more, is, if researchers can replicate these findings so quickly, what&amp;rsquo;s stopping blackhats from doing the same before the issue is fixed? Feels like the window between &amp;lsquo;first discovery&amp;rsquo; and &amp;lsquo;mass awareness&amp;rsquo; is getting dangerously short.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Exactly. The triage teams are seeing it too. This is not a researcher&amp;rsquo;s paranoia. It is a pattern.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://blog.himanshuanand.com/images/sashko.png&#34; alt=&#34;sashko&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://blog.himanshuanand.com/images/nobody.png&#34; alt=&#34;NobodyIsNobody&#34;&gt;&lt;/p&gt;
&lt;p&gt;At first I thought, okay, same tools, same prompts, makes sense. But then I did the uncomfortable math.&lt;/p&gt;
&lt;p&gt;If eleven people reported the bug, how many found it and did &lt;strong&gt;not&lt;/strong&gt; report it?&lt;/p&gt;
&lt;p&gt;The same LLM that helped eleven honest researchers is also available to everyone else. It does not check your intentions at the door. Out of those eleven reporters, only 1 gets the CVE credit. Only 1 gets the bounty. What about the other ten? How many get frustrated? How many decide to sell it instead of wait? And the people who never reported it at all — they are not sitting on a 90 day clock. They are not sitting on any clock.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The 90 day window is not protecting users. It is giving everyone who already has the bug a 90 day head start.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id=&#34;story-2-30-minutes-from-patch-to-exploit&#34;&gt;story 2: 30 minutes from patch to exploit&lt;/h2&gt;
&lt;p&gt;Recently, React patched a bunch of security issues (&lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-23870&#34;&gt;CVE-2026-23870&lt;/a&gt;, &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-44575&#34;&gt;CVE-2026-44575&lt;/a&gt;, &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-44579&#34;&gt;CVE-2026-44579&lt;/a&gt;, &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-44574&#34;&gt;CVE-2026-44574&lt;/a&gt;, &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-44578&#34;&gt;CVE-2026-44578&lt;/a&gt;) and wrote a public blog post about it. Standard practice. Show your work, explain the fix, give the community a heads up.&lt;/p&gt;
&lt;p&gt;I read the post out of curiosity. Then I thought, let me see how hard it would be to turn this patch into a working exploit. Just an experiment, on my own machine, against a local test app.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;30 minutes.&lt;/strong&gt; From reading the patch to having a working exploit. AI did most of the heavy lifting: understanding the diff, identifying the vulnerable code path, writing the PoC. The published issue was a denial of service, and that is what my PoC reproduced, but the underlying primitive could go further with more work.&lt;/p&gt;
&lt;p&gt;In the old world, turning a public patch into a working exploit (n-day exploitation) took skilled reverse engineers days to weeks. That gap was the safety net. &amp;ldquo;We shipped the patch, admins have a few days to update.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;That safety net is gone. The gap is now measured in minutes for simple bugs, maybe hours for complex ones. The skilled reverse engineer is optional. The LLM does the boring parts and the human just steers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The moment a patch ships, assume the exploit exists.&lt;/strong&gt; There is no grace period. Companies cannot afford to &amp;ldquo;schedule&amp;rdquo; patch deployment for the next maintenance window. The maintenance window is now.&lt;/p&gt;
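The workflow is easy to demystify: hand nothing but the patch diff to a model (or a human) and the bug class falls out. A toy illustration with Python's `difflib`, using an invented function rather than React's actual code:

```python
import difflib

# Pre-patch: a length argument is trusted without validation (the bug).
vulnerable = [
    "def read_chunk(buf, n):\n",
    "    return buf[:n]\n",
]
# Post-patch: the fix itself names the bug class (missing bounds check)
# and points straight at the function to target on unpatched versions.
patched = [
    "def read_chunk(buf, n):\n",
    "    if n < 0 or n > len(buf):\n",
    "        raise ValueError('bad length')\n",
    "    return buf[:n]\n",
]

diff = "".join(difflib.unified_diff(
    vulnerable, patched,
    fromfile="a/chunk.py", tofile="b/chunk.py",
))
print(diff)
```

The diff alone tells an attacker where to look and what input to craft; scale that reasoning up with an LLM and 30 minutes stops sounding surprising.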
&lt;h2 id=&#34;story-3-the-week-linux-caught-fire&#34;&gt;story 3: the week linux caught fire&lt;/h2&gt;
&lt;p&gt;If you want the clearest possible proof that the 90 day disclosure model is dead, look at the last two weeks of the Linux kernel. Two back-to-back critical vulnerabilities. Both with public exploits. Both affecting every major distribution. The timeline reads like a horror movie.&lt;/p&gt;
&lt;h3 id=&#34;act-1-copy-fail&#34;&gt;act 1: copy fail&lt;/h3&gt;
&lt;p&gt;On &lt;strong&gt;April 29&lt;/strong&gt;, &lt;a href=&#34;https://code.xint.io/&#34;&gt;Xint Code&lt;/a&gt; (built by the team at &lt;a href=&#34;https://theori.io/&#34;&gt;Theori&lt;/a&gt;, nine-time DEF CON CTF champions) publicly disclosed &lt;a href=&#34;https://copy.fail/&#34;&gt;Copy Fail&lt;/a&gt; — &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-31431&#34;&gt;&lt;strong&gt;CVE-2026-31431&lt;/strong&gt;&lt;/a&gt;. A straight-line logic flaw in the kernel crypto subsystem. No race condition needed. 100% reliable. A &lt;strong&gt;732-byte Python script&lt;/strong&gt; that gives you root on every single Linux distribution shipped since 2017.&lt;/p&gt;
&lt;p&gt;Every. Single. One. Ubuntu, RHEL, Amazon Linux, SUSE, all of them. One &lt;code&gt;curl | python3 &amp;amp;&amp;amp; su&lt;/code&gt; away from game over.&lt;/p&gt;
&lt;p&gt;The terrifying detail: they found it using AI. About an hour of automated scanning against the kernel &lt;code&gt;crypto/&lt;/code&gt; subsystem. That is it. One hour. One scanner. Nine years of exposure. For the full technical breakdown, read &lt;a href=&#34;https://xint.io/blog/copy-fail-linux-distributions&#34;&gt;Xint&amp;rsquo;s writeup&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Copy Fail did get a patch (mainline commit &lt;a href=&#34;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a664bf3d603d&#34;&gt;&lt;code&gt;a664bf3d603d&lt;/code&gt;&lt;/a&gt;) and a straightforward mitigation: disable the &lt;code&gt;algif_aead&lt;/code&gt; module. People started patching. Deep breath. Okay. Maybe we can handle this.&lt;/p&gt;
&lt;p&gt;Then threat actors showed up. Iranian adversaries were observed leveraging the vulnerability to compromise Ubuntu servers and repurpose them as nodes for DDoS campaigns. A kernel privilege escalation found by AI, disclosed publicly, weaponized by nation-state actors, used to build attack infrastructure. All within days.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://blog.himanshuanand.com/images/llm_disclosure_meme.jpg&#34; alt=&#34;Enlightment&#34;&gt;&lt;/p&gt;
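For the mitigation side: whether `algif_aead` is currently loaded is a one-line check against `/proc/modules`, and keeping it from loading is a one-line `modprobe.d` entry. A small audit helper, Linux-only and a sketch rather than fleet tooling:

```python
from pathlib import Path

def module_loaded(name: str) -> bool:
    """True if the named kernel module appears in /proc/modules (Linux only)."""
    modules = Path("/proc/modules")
    if not modules.exists():
        return False  # not a Linux host
    return any(line.split()[0] == name
               for line in modules.read_text().splitlines() if line.strip())

def blacklist_line(name: str) -> str:
    # Dropped into e.g. /etc/modprobe.d/disable-algif-aead.conf, this makes
    # any attempt to load the module run /bin/false instead of loading it.
    return f"install {name} /bin/false"

if module_loaded("algif_aead"):
    print("algif_aead loaded; mitigation:", blacklist_line("algif_aead"))
else:
    print("algif_aead not loaded (or not a Linux host)")
```

Note that an `install` override beats a plain `blacklist` entry here, since `blacklist` does not stop explicit module load requests.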
&lt;h3 id=&#34;act-2-dirty-frag&#34;&gt;act 2: dirty frag&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Barely one week later&lt;/strong&gt;, on &lt;strong&gt;May 7&lt;/strong&gt;, researcher Hyunwoo Kim (&lt;a href=&#34;https://x.com/v4bel&#34;&gt;@v4bel&lt;/a&gt;) published &lt;a href=&#34;https://github.com/V4bel/dirtyfrag&#34;&gt;Dirty Frag&lt;/a&gt; — &lt;a href=&#34;https://nvd.nist.gov/vuln/detail/CVE-2026-43284&#34;&gt;&lt;strong&gt;CVE-2026-43284&lt;/strong&gt;&lt;/a&gt; and &lt;strong&gt;CVE-2026-43500&lt;/strong&gt;. Two chained vulnerabilities in the kernel&amp;rsquo;s IPSec ESP (&lt;code&gt;esp4&lt;/code&gt;/&lt;code&gt;esp6&lt;/code&gt;) and RxRPC networking modules. Same bug class as Copy Fail and &lt;a href=&#34;https://dirtypipe.cm4all.com/&#34;&gt;Dirty Pipe&lt;/a&gt;. Same page-cache corruption technique. Different attack path.&lt;/p&gt;
&lt;p&gt;The critical part: &lt;strong&gt;Dirty Frag works even if you applied the Copy Fail mitigation.&lt;/strong&gt; Even if you blacklisted &lt;code&gt;algif_aead&lt;/code&gt;. Dirty Frag does not use that module. It takes a completely different route to the same result: unprivileged user to root, deterministically, on every major distro. Ubuntu, RHEL 10.1, openSUSE, CentOS Stream, AlmaLinux, Fedora 44. A one-liner to compile and run.&lt;/p&gt;
&lt;p&gt;And here is where the disclosure model completely fell apart.&lt;/p&gt;
&lt;p&gt;Hyunwoo Kim reported to &lt;code&gt;security@kernel.org&lt;/code&gt; on April 29-30. He submitted patches publicly. He coordinated with the &lt;a href=&#34;https://oss-security.openwall.org/wiki/mailing-lists/distros&#34;&gt;&lt;code&gt;linux-distros&lt;/code&gt;&lt;/a&gt; mailing list on May 7, with a 5-day embargo agreed upon. On that same day — &lt;strong&gt;within hours&lt;/strong&gt; — an unrelated third party published detailed exploit information for the ESP vulnerability, breaking the embargo.&lt;/p&gt;
&lt;p&gt;After consulting with the distro maintainers, Hyunwoo published the &lt;a href=&#34;https://github.com/V4bel/dirtyfrag/blob/master/assets/write-up.md&#34;&gt;full Dirty Frag writeup&lt;/a&gt;, exploit code, and a working PoC.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;At that moment, zero Linux distributions had a patch available.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;As of today, only CVE-2026-43284 (the ESP side) has a &lt;a href=&#34;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=f4c50a4034e62ab75f1d5cdd191dd5f9c77fdff4&#34;&gt;mainline fix&lt;/a&gt;. CVE-2026-43500 (the RxRPC component) &lt;strong&gt;still has no upstream patch&lt;/strong&gt;. And the chained exploit that combines both works on basically everything. (&lt;a href=&#34;https://ubuntu.com/blog/dirty-frag-linux-vulnerability-fixes-available&#34;&gt;Ubuntu&lt;/a&gt;, &lt;a href=&#34;https://access.redhat.com/security/vulnerabilities/RHSB-2026-003&#34;&gt;Red Hat&lt;/a&gt;, and &lt;a href=&#34;https://www.tenable.com/blog/dirty-frag-cve-2026-43284-cve-2026-43500-frequently-asked-questions-linux-kernel-lpe&#34;&gt;others&lt;/a&gt; have published their own advisories.)&lt;/p&gt;
&lt;p&gt;Microsoft&amp;rsquo;s Defender team &lt;a href=&#34;https://www.microsoft.com/en-us/security/blog/2026/05/08/active-attack-dirty-frag-linux-vulnerability-expands-post-compromise-risk/&#34;&gt;confirmed limited in-the-wild exploitation&lt;/a&gt; within &lt;strong&gt;24 hours&lt;/strong&gt; of disclosure. Attackers gaining SSH access, deploying an ELF binary, popping root via &lt;code&gt;su&lt;/code&gt;, modifying authentication configs, wiping session files, moving laterally. The full playbook, live, in production environments.&lt;/p&gt;
&lt;p&gt;CTS (&lt;a href=&#34;https://x.com/gf_256/status/2052480591489122747&#34;&gt;@gf_256&lt;/a&gt;) summed it up in four words:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;responsible disclosure is dead🤦&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://blog.himanshuanand.com/images/cts_tweet.png&#34; alt=&#34;CTS Tweet&#34;&gt;
&lt;a href=&#34;https://x.com/gf_256/status/2052480591489122747&#34;&gt;https://x.com/gf_256/status/2052480591489122747&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Yeah.&lt;/p&gt;
&lt;h2 id=&#34;so-what-is-actually-dead-here&#34;&gt;so what is actually dead here&lt;/h2&gt;
&lt;p&gt;Let me be specific about what I think is broken beyond repair.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The 90 day disclosure window is dead.&lt;/strong&gt; Not &amp;ldquo;needs reform&amp;rdquo;. Not &amp;ldquo;could use some tweaking&amp;rdquo;. Dead. It was designed for a world where finders were rare and exploit development was slow. LLMs have made finders abundant and exploit development fast. When ten unrelated researchers find the same bug in six weeks, and AI can turn a patch diff into a working exploit in 30 minutes, who exactly is the 90 day window protecting?&lt;/p&gt;
&lt;p&gt;Nobody. It is protecting nobody. It is just exposure with a polite name.&lt;/p&gt;
&lt;p&gt;Copy Fail went from AI scan to public PoC to nation-state weaponization in days. Dirty Frag&amp;rsquo;s embargo was broken within hours by a third party who independently found the same bug class. You cannot coordinate disclosure when the same vulnerability is being independently rediscovered by multiple researchers and AI tools at the same time. The information does not stay contained anymore. It has LLM-powered legs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Monthly patch cycles are dead too.&lt;/strong&gt; A 30 day window between vulnerability and fix assumes attackers are slower than your release train. They are not. They have been faster for a while now, and the gap is only widening. Microsoft saw Dirty Frag in the wild within 24 hours. Your monthly maintenance window is not a safety margin. It is an attack window.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;Wait for the advisory&amp;rdquo; is dead.&lt;/strong&gt; If you are reading CVE descriptions while attackers are reading &lt;code&gt;git log --diff-filter=M&lt;/code&gt;, you are already behind. The advisory is a downstream artifact. The patch diff is the signal.&lt;/p&gt;
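Reading the diff instead of the advisory can be partially automated. A sketch that pulls the touched paths out of a unified diff and intersects them with the modules your own code imports; the diff text and module names are invented for illustration:

```python
import re

def touched_files(unified_diff: str) -> set:
    # File paths appear on the "+++ b/<path>" lines of a unified diff.
    return set(re.findall(r"^\+\+\+ b/(\S+)", unified_diff, flags=re.M))

def affected_modules(unified_diff: str, modules_we_use: set) -> set:
    # Map touched .py paths to dotted module names, then intersect with
    # the modules this codebase actually imports.
    touched = {p[:-3].replace("/", ".") for p in touched_files(unified_diff)
               if p.endswith(".py")}
    return touched & modules_we_use

# Hypothetical upstream security fix touching two files.
security_patch = """\
--- a/somelib/parser.py
+++ b/somelib/parser.py
--- a/somelib/util.py
+++ b/somelib/util.py
"""

print(affected_modules(security_patch, {"somelib.parser", "otherlib.core"}))
```

This is the cheap deterministic layer; the LLM pass on top of it is what turns "this file changed" into "this change matters to you".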
&lt;h2 id=&#34;what-the-industry-needs-to-do-and-i-am-not-sugarcoating-this&#34;&gt;what the industry needs to do (and I am not sugarcoating this)&lt;/h2&gt;
&lt;p&gt;I have one ask. One. And I know it sounds extreme. I know it is a lot. But everything I have shown you above points to the same conclusion:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Treat every critical security issue as P0 and fix it immediately.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Not &amp;ldquo;within 24 hours&amp;rdquo;. Not &amp;ldquo;in the next sprint&amp;rdquo;. Not &amp;ldquo;after we assess impact&amp;rdquo;. Now. As in, stop what you are doing and fix it now. I know that sounds unreasonable. I know production deployments are complicated. I know change management exists for good reasons. But the threat landscape does not care about your change management process.&lt;/p&gt;
&lt;p&gt;Here is what &amp;ldquo;immediately&amp;rdquo; actually looks like in practice:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you are a vendor receiving a critical bug report&lt;/strong&gt;, your clock starts the moment the report lands. Not when you finish triaging. Not when engineering picks it up. The moment it lands. Because if someone reported it to you, assume 10 other people have it and at least one of them is not friendly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you are a researcher&lt;/strong&gt;, stop sitting on critical bugs. Push for the shortest possible disclosure window. If the vendor cannot fix it in a week, that is a vendor problem, not a disclosure problem. The old &amp;ldquo;give them time&amp;rdquo; courtesy made sense when you were the only finder. You are not the only finder anymore.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;If you are running vulnerability management&lt;/strong&gt;, it needs to be real-time. The old cadence of &amp;ldquo;scan weekly, triage in sprint, patch in cycle&amp;rdquo; is a timeline that attackers left behind months ago. The new maximum response time for a critical issue is hours. Not days. Hours. And even that might be too slow.&lt;/p&gt;
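To make real-time vulnerability management tractable, at least the matching step can be dumb and fast. A naive sketch of checking pinned versions against an advisory's affected range; real tools like `pip-audit` and the OSV database do this properly, and the version numbers here are made up:

```python
def parse(version: str) -> tuple:
    # Naive: handles plain dotted numeric versions only.
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, introduced: str, fixed: str) -> bool:
    # Affected if introduced <= installed < fixed (tuple comparison).
    return parse(introduced) <= parse(installed) < parse(fixed)

# Hypothetical advisory: bug introduced in 2.0.0, fixed in 2.3.1.
pins = {"somelib": "2.3.0", "otherlib": "1.9.9"}
for name, version in pins.items():
    if is_vulnerable(version, "2.0.0", "2.3.1"):
        print(f"{name}=={version} is affected; upgrade to >=2.3.1")
```

The point is that this check costs nothing to run on every advisory the moment it lands, rather than during a weekly scan.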
&lt;h3 id=&#34;a-note-for-the-blue-team&#34;&gt;a note for the blue team&lt;/h3&gt;
&lt;p&gt;This part is important enough that it gets its own section.&lt;/p&gt;
&lt;p&gt;The attackers have already integrated LLMs into their exploit pipelines. If you have not done the same on the defensive side, you are bringing a clipboard to a gunfight. Here is what I think every engineering and security team should be building toward right now:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Integrate LLMs at the point of code push.&lt;/strong&gt; Every pull request, every merge, every deploy. Run AI-assisted security review as part of your CI pipeline, the same way you run linters and unit tests. Not as an afterthought, not as a quarterly audit. At push time. If the code has a vulnerability, catch it before it reaches production. The cost of fixing a bug in a PR review is orders of magnitude lower than fixing it after a CVE drops.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Integrate LLMs for patch analysis.&lt;/strong&gt; When an upstream dependency releases a security patch, your pipeline should automatically pull the diff, analyze what changed, determine if your codebase is affected, and flag it. This should not require a human to read a mailing list and open a Jira ticket. It should happen in minutes, automatically, the moment the patch hits the public repo. If &lt;a href=&#34;https://code.xint.io/&#34;&gt;Xint Code&lt;/a&gt; found Copy Fail in one hour of automated scanning, what is your excuse for not scanning your own dependencies the same way?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Integrate LLMs for dependency scanning.&lt;/strong&gt; Your supply chain is only as strong as your weakest transitive dependency. AI-powered dependency scanners can now trace vulnerability impact through dependency trees, flag affected versions, and even suggest upgrade paths. Run them continuously, not weekly.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Test your patches with AI before you ship them.&lt;/strong&gt; One of the scariest things about the React story is that an LLM can turn a patch into an exploit in 30 minutes. Flip that on its head: before you publish a security patch, use AI to verify that the patch actually fixes the issue and does not introduce a new one. Use it to generate regression tests. Use it to check if the same pattern exists elsewhere in your codebase. If attackers will do this the moment your patch lands, you should do it first.&lt;/p&gt;
&lt;p&gt;I know this sounds like a lot. I know not every team has the resources to build all of this tomorrow. But the trajectory is clear. The window between &amp;ldquo;vulnerability exists&amp;rdquo; and &amp;ldquo;vulnerability is exploited&amp;rdquo; is shrinking to zero. The only way to keep up is to automate the defensive side at the same speed the offensive side is already moving. We are going to see more and more zero-days exploited in the wild, faster and faster. That is not a prediction, it is just the math. Same tools, lower barrier to entry, more finders, shorter timelines. The teams that survive this shift will be the ones who made AI a first-class citizen in their security pipeline before they were forced to.&lt;/p&gt;
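One way to start on the push-time idea without betting everything on a model: a deterministic pre-filter over a pull request's added lines that flags obviously dangerous patterns, with the actual LLM review as a second pass. The patterns and the diff here are illustrative, not a complete ruleset:

```python
import re

# Cheap deterministic pre-filter: patterns worth an immediate look,
# run before (or alongside) the LLM review pass in CI.
RISKY = {
    r"\beval\(": "dynamic eval",
    r"\bpickle\.loads\(": "untrusted deserialization",
    r"\bos\.system\(": "shell command execution",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def added_lines(unified_diff: str):
    # Yield only the lines a PR adds ("+" prefix, excluding "+++" headers).
    for line in unified_diff.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            yield line[1:]

def pre_filter(unified_diff: str):
    findings = []
    for line in added_lines(unified_diff):
        for pattern, why in RISKY.items():
            if re.search(pattern, line):
                findings.append((why, line.strip()))
    return findings

diff = """\
+++ b/app.py
+import pickle
+data = pickle.loads(request.body)
"""
print(pre_filter(diff))
```

The regex layer is noisy on its own; its job is to guarantee the worst patterns never slip through while the LLM handles the cases a grep cannot see.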
&lt;h2 id=&#34;final-thoughts&#34;&gt;final thoughts&lt;/h2&gt;
&lt;p&gt;I keep coming back to the same image in my head. It is a sysadmin reading the Dirty Frag advisory on May 7, realizing that there is no patch available, that the exploit is already public, that Microsoft is already seeing it in the wild, and that the mitigation is &amp;ldquo;disable your IPSec modules&amp;rdquo;. And this person has 400 servers to touch.&lt;/p&gt;
&lt;p&gt;That is the new reality. Not a hypothetical. Not a war game scenario. That was last week.&lt;/p&gt;
&lt;p&gt;The 90 day disclosure policy is dead. Monthly patch cycles are dead. The assumption that you have time between disclosure and exploitation is dead. What is not dead is the ability to move fast, automate hard, and treat critical bugs like the emergencies they are.&lt;/p&gt;
&lt;p&gt;The same AI wave that broke the old model also enables the new one. Faster patching, automated scanning, real-time threat intel, AI-assisted code review. The tools exist. The question is whether defenders will use them before attackers do.&lt;/p&gt;
&lt;p&gt;Right now, the attackers are winning that race.&lt;/p&gt;
&lt;p&gt;Let us fix that.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re still reading this, you&amp;rsquo;re awesome. Thanks for sticking with me!&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;I will go deeper on several of these points in follow-up posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;10 people found my bug before me&lt;/strong&gt; (the duplicate finder problem and what it means for bounties) → &lt;em&gt;coming soon&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;30 minutes from patch to exploit&lt;/strong&gt; (the React story and the death of the n-day gap) → &lt;em&gt;coming soon&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;the week linux caught fire&lt;/strong&gt; (Copy Fail + Dirty Frag technical deep dive) → &lt;em&gt;coming soon&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;your CI/CD pipeline needs AI now&lt;/strong&gt; (the defensive playbook) → &lt;em&gt;coming soon&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;blue team survival guide for the LLM era&lt;/strong&gt; (practical integration patterns for defenders) → &lt;em&gt;coming soon&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If any of this resonated, hit me up on Twitter/X (&lt;a href=&#34;https://x.com/anand_himanshu&#34;&gt;@anand_himanshu&lt;/a&gt;). And if you disagree, &lt;em&gt;especially&lt;/em&gt; hit me up. I would love to hear the other side.&lt;/p&gt;
&lt;p&gt;Thanks for reading.&lt;/p&gt;
</content>
    </item>
    
  </channel>
</rss>
