Cognitive security is a category we've named and defined at Galahad. It refers to the human vulnerability chain in AI use - the organisational attack surface that exists between the person typing a prompt and the business outcome that prompt influences.
This isn't about AI ethics or responsible AI in the abstract. It's about specific, exploitable weaknesses: employees pasting confidential data into AI tools; teams building critical workflows on unaudited prompts; decision-makers trusting AI outputs without understanding how those outputs were generated; and entire organisations exposing their strategic thinking to systems that may be logging, training on, or leaking that information.
Traditional cybersecurity protects systems. Cognitive security protects the thinking that happens when humans and AI systems interact.