Terminal
  • Menu ▾
    • About
    • Showcase
  • About
  • Showcase

Why Relying on LLMs for Code Can Be a Security Nightmare

2025-08-22 ::
#security  #llm  #appsec  #blue team 
LLM generated code can ships demo logic with security issues not defenses. Here is a real world example and how it could be abused.
Read more →

Detecting LLM Prompt Injection Without Slowing You Down

2025-08-10 :: Himanshu Anand
#AI Security  #Prompt Injection  #LLM  #Machine Learning  #Security Tools 
A lightweight, fast, and easy-to-use service for detecting LLM prompt injection attempts before they reach your model. No extra latency, no extra LLM calls — just a simple API that returns true or false.
Read more →
© 2025 Powered by Hugo :: Theme made by panr