ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Nov 19, 2025

Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive

Get Free Report & Network Analysis

Please check your email for the free report.