TLDR The 90-day responsible disclosure window was built for a world where bug finders were rare and exploit development was slow. That world is gone: LLMs have compressed both timelines to near-zero. I have seen it firsthand, and so has everyone else paying attention. This post lays out why the old model is broken, with real stories, and makes one ask of the industry: treat every critical security issue as P0 and patch it immediately.
I was poking around the OpenSSL source code recently. I wasn't hunting for anything specific, just curious about how the new post-quantum crypto was wired up in version 4.0.0. Given that OpenSSL is one of the most heavily audited codebases around, I went in expecting to find nothing interesting. Instead I tripped over a single-character logic bug that leaks cryptographic randomness onto the stack on every signing call.
Quick disclaimer: I am not a crypto person.
TLDR
While looking for a way to stream the India vs Pakistan cricket match on 14th September 2025, I stumbled across a suspicious search result on a europa.eu dev subdomain. It was being abused for blackhat SEO and was redirecting users to scam streaming sites. I traced similar behavior across other high-profile domains, reported the issue to CERT-EU via email (after some Twitter help), and the problem was confirmed fixed on 6th November 2025.
TLDR
I received a legit-looking Zoom doc email from HR while on the job hunt. It redirected to a site with a fake “bot protection” gate and then to a Gmail credential phish. The attackers exfiltrate credentials live over a WebSocket and even validate them in the backend. Keep reading for the detailed analysis.
Okay, this is kind of funny (in a “please tell me this is not my life” way).
Starting Point It all began with a tweet:
sdcyberresearch on X
This tweet hinted at a Magecart-style campaign involving malicious JavaScript injection to skim payment data.
Initial Sample The script was hosted at:
https://www.cc-analytics[.]com/app.js
The original code was heavily obfuscated:
(function() { function _0x1B3A1(_0x1B563, _0x1B3FB, _0x1B455, _0x1B509, _0x1B4AF, _0x1B5BD) { _0x1B4AF = function(_0x1B3A1) { return (_0x1B3A1 < _0x1B3FB ? '' : _0x1B4AF(parseInt(_0x1B3A1 / _0x1B3FB))) + ((_0x1B3A1 = _0x1B3A1 % _0x1B3FB) > 35 ?
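The recursive helper visible in the truncated sample (`_0x1B4AF`) matches the classic packer base-conversion routine: it turns an integer index into a short token in base `_0x1B3FB`, rendering digits 0–35 in base 36 and, in the common variant, mapping larger digits through `String.fromCharCode(c + 29)`. A minimal Python sketch of that encoder, assuming the snippet follows that standard pattern (the `+ 29` branch is cut off in the sample above, so it is an assumption):

```python
def packer_encode(n: int, base: int) -> str:
    """Python equivalent of the obfuscator's recursive token encoder.

    Assumes the common packer pattern: digits 0-35 render as base-36
    characters (0-9, a-z); larger digits map through chr(d + 29),
    which yields uppercase letters starting at 'A' for digit 36.
    """
    prefix = "" if n < base else packer_encode(n // base, base)
    d = n % base
    return prefix + (chr(d + 29) if d > 35 else "0123456789abcdefghijklmnopqrstuvwxyz"[d])
```

Replaying this encoder over the script's string table lets you build a token-to-string dictionary and substitute the original strings back in, which is usually the first step in deobfuscating samples of this family.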
A lightweight, fast, and easy-to-use service for detecting LLM prompt injection attempts before they reach your model. No extra latency, no extra LLM calls — just a simple API that returns true or false.
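A client for a service like this reduces to one POST and a boolean check. The endpoint URL and JSON field names below are hypothetical (the blurb names no schema), so this is an assumed sketch of the shape, not the service's real API:

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- adapt to the real service.
API_URL = "https://detector.example/v1/check"

def build_request(prompt: str) -> bytes:
    """Serialize the prompt into the (assumed) JSON request body."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def parse_response(body: bytes) -> bool:
    """Pull the (assumed) boolean verdict out of the JSON response."""
    return bool(json.loads(body)["injection"])

def is_injection(prompt: str) -> bool:
    """One round trip: POST the prompt, return True if it is flagged."""
    req = urllib.request.Request(
        API_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return parse_response(resp.read())
```

The point of the true/false contract is that the check can sit inline before the model call: gate on `is_injection(user_text)` and reject or quarantine flagged prompts without adding an extra LLM round trip.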