Addressing SaaS security challenges in the age of GenAI

A response to JPMorganChase’s open letter

Thank you to JPMorganChase and Patrick Opet for their open letter addressing the evolving landscape of technology and the critical role of suppliers in this journey. We appreciate the great practical and proactive approach being taken to ensure a robust and resilient supply chain.

For this purpose of our response, we’ve focused on a few issues highlighted which, based on our extensive experience, we can share insights on.

SaaS integration risks in GenAI applications

Opet’s letter identifies how modern SaaS integration patterns erode security boundaries. We see this challenge magnified in the GenAI space, where organizations rush to integrate language models into their applications without proper security architecture. The software supply chain risks described manifest clearly in how companies implement language model capabilities – prioritizing features over security fundamentals.

Key findings from LLM application testing

LLM vulnerabilities don’t exist in isolation. Traditional weaknesses combine with GenAI-specific issues to create new attack chains. Media focus remains on hallucinations and jailbreaking LLMs to produce CBRN (Chemical, Biological, Radiological, and Nuclear) content, while organizations integrating GenAI into their own solutions face different risks – primarily prompt injection attacks that enable social engineering, data exfiltration, and denial of service. Security teams often test LLM behavior and application security separately, missing critical attack patterns.

SaaS security challenges

At a practical level, there are two very common areas where SaaS applications fail to provide adequate security. The first is gating single sign-on functionality behind additional cost or the “enterprise” price plans, forcing users to make a trade-off between adequate identity security and cost. The second is comprehensive, high-fidelity audit logging, which is often also gated behind expensive plans or add-ons, if available at all. These limitations hinder an organization’s ability to prevent, detect, and respond to attacks against their SaaS estate.

We hope that SaaS vendors see this open letter as a call to arms and work towards providing a hardened, secure-by-default experience to their consumers.

How Reversec can help

We are here to support organizations in determining and quantifying the risks posed by their SaaS applications, and can assess/audit how they’re deployed and configured to ensure they’re hardened to an appropriate level.

Based on our work with enterprise clients actively integrating GenAI features into their products, we’ve developed practical security solutions that address the real-world risks in LLM applications. These tools emerged from direct collaboration with development teams facing these challenges daily, and we’re excited about the potential for joint initiatives that can drive innovation and create value for all.

  • Spikee: Open-source tool for testing LLM application resilience against security threats.
  • LLM Application Security Canvas: Our framework implements four critical rules:
    • Treat the LLM as untrusted: Applications must be designed with the assumption that LLMs can be manipulated by attackers and should never implicitly trust their outputs for sensitive operations.
    • Validate LLM outputs: Block dangerous content like unauthorized URLs, JavaScript, and markdown images. Detect specific harm categories including hate speech, violence, and self-harm. Use real-time hallucination and off-topic checks to prevent model drift.
    • Validate LLM inputs: Block common jailbreak/prompt injection patterns using machine learning methods, including semantic search with embeddings, specialized classifiers, and LLM-as-judge techniques. Apply topical guardrails and semantic routing to ensure queries match intended scope. Implement instruction/data separation through spotlighting techniques.
    • Implement adaptive content moderation: Dynamic moderation systems identify and block malicious patterns, while suspending accounts that repeatedly attempt exploitation.
Webinars

Building secure LLM apps into your business

Read more
Building secure LLM apps into your business
Webinars

Building secure LLM apps into your business

Watch now
April 11, 2024

Related content

Our thinking

Introducing Reversec: Shaping the future of offensive cybersecurity

April 28, 2025
Introducing Reversec: Shaping the future of offensive cybersecurity
Our thinking

Prompt injections could confuse AI-powered agents

May 17, 2024
Prompt injections could confuse AI-powered agents
Whitepapers

A risk-based formula for security testing

May 25, 2024
A risk-based formula for security testing