Growth Frameworks

Building Resilience in an AI World: Cybersecurity Strategies for Growth-Stage Companies

As AI adoption accelerates, so do the risks. Here’s how growth companies can strengthen their cybersecurity posture without slowing innovation.

Across many growth stage companies, the pace of AI development and implementation continues to accelerate. However, in the rush to adopt AI, organizations may underestimate the related cybersecurity risks. AI tools can expose companies to new forms of attack, often without delivering the enterprise-grade cybersecurity protections that leaders have come to expect from traditional SaaS solutions. As AI becomes increasingly embedded in core operations, we believe it’s essential for growth-stage companies to evaluate their cybersecurity posture through a modern lens. In our view, this means adhering to established cybersecurity best practices internally, as well as ensuring that any third-party AI tools and vendors meet those same standards.

In the sections that follow, we outline three emerging cybersecurity vulnerabilities unique to AI and five practical steps to help strengthen your defenses. Taken together, we feel these represent a playbook for navigating AI innovation without sacrificing security.

Vulnerabilities

AI tools have the potential to expose new attack vectors and meaningfully expand the attack surface. Leaders who fail to recognize these threats risk exposing their companies to attacks that are designed to exploit new gaps in their security perimeter. In our view, awareness is the first step in designing defenses that can keep pace with AI’s rapid adoption.

1. Prompt injection turns inputs into attack surfaces

LLMs take instructions in the form of a prompt and generate a response based on those instructions. Additional context in the form of PDFs, images, website content and software pull requests can also be passed along with the prompt. While often helpful to steer the model’s response, this additional context can expose the model to prompt injection attacks. Prompt injection attacks typically present in the form of hidden or malicious instructions that can manipulate the model’s behavior and dramatically change the output. Prompt injection attacks use a variety of tactics, including text added in tiny clear fonts or hidden in images. The malicious text might instruct the model to disregard original directives and override other safeguards. For example, we’ve seen instances where instructions in pull requests for open-source code repositories have resulted in coding assistants posting files from private code repositories to public locations. These examples highlight a broader truth: any content passed into AI systems—whether text, images or code—can serve as an attack surface, making rigorous input validation and content controls essential.

2. Code volume grows faster than review capacity

Many headlines trumpeting the volume of AI-generated code, in our view, miss an important point: all of this code still requires review and maintenance. Peer code reviews have, in our experience, long been a best practice in software development, and they continue to play an important role in the development process. Today, many human reviewers assume that AI assistants have incorporated cybersecurity best practices in code, but in practice across companies within portfolio and beyond this has frequently not been the case.

Compounding this vulnerability, AI coding assistants tend to produce verbose and generic code that may not be tailored to the specific context and security practices of the environment in which it will be used. While AI-generated code may not be inherently less secure than code written by human developers, the sheer volume can present a challenge for the peer review process. Human reviewers can struggle to perform the thorough review needed, and offloading code review to other AI tools, while appealing on the surface, can ultimately amplify the problem. We believe the lesson is clear: AI-generated code should accelerate development, not replace established review practices. Without disciplined oversight, the sheer volume of code can overwhelm teams and quietly erode security.

3. AI agents widen access to sensitive systems

AI agents can be created to perform many valuable tasks for individuals and organizations, including assisting with emails and meeting preparation, writing code and executing developer workflows, answering customer service questions and more. To perform these tasks, AI agents often require access to sensitive systems such as email, code repositories or CRMs, making them targets for malicious actors seeking to gain access and exfiltrate data or otherwise control their behavior. As organizations experiment with agentic workflows, they should weigh productivity gains against the heightened exposure of sensitive systems, and design safeguards accordingly.

Mitigants

The vulnerabilities explored above demonstrate how traditional security assumptions often fall short in the context of AI. Below, we share five steps that, while not exhaustive, can form a solid foundation on which to build a stronger, safer AI strategy.

1. Apply least privilege by default

Limiting system access and scope for action is an important step organizations can take to protect themselves from AI attacks. We believe read-only access should be the default, and identities used by AI to authenticate should have narrow access to the specific applications necessary for their functionality. Shell access, which gives access to a machine’s operating system, should be avoided. In agentic workflows, developers should constrain the set of tools an agent can call and establish rate limits for actions to help minimize the impact of rogue behavior. To further isolate risks, developers should consider setting up chains of agents, each with small discrete tasks and each allowing for the narrowest set of privileges possible. This principle of “least privilege” has long been a cybersecurity best practice; in the AI era, we believe it becomes indispensable.

2. Scan everything going in and out of LLMs

Whenever possible, we believe any content sent to an LLM should first be scanned for sensitive data such as personal information, credentials and trade secrets to prevent these data from leaving an organization. Likewise, LLM outputs should be scanned and validated before reaching users or other applications.

Technologists have a long history of identifying and remedying vulnerabilities. Based on industry practices and published research, certain mitigation strategies—such as those targeting SQL injection vulnerabilities—are widely used to help reduce security risks. However, validation is typically more difficult in a world of LLMs, since text instructions don’t necessarily have the syntax signatures that coding languages do. In our experience, semantic filters and string-checking can provide some defense, and network security policies further protect the organization. While no filter or strategy is perfect, layering multiple validation methods can reduce the likelihood of malicious instructions slipping through undetected.

3. Keep humans in the loop for critical decisions

Humans should remain the gatekeepers for high-impact actions such as authenticating into sensitive systems, writing to or deleting from production environments and any actions with material financial or operational implications. If the volume of reviews is high, however, humans may become desensitized to risks and less thorough in their review, so careful system design and incentive structures are essential. So-called YOLO (“you only look once”) settings, where a developer enables auto-approval and allows the AI agent to bypass manual confirmation before taking action, should be avoided entirely. In short, we believe humans must remain in the loop for critical decisions and that automation without accountability is an unacceptable risk.

4. Track every action and watch for anomalies

We believe organizations should log all activity generated by AI tools and use a SIEM tool for correlation and anomaly detection. DNS monitoring should be enabled to help detect suspicious domains with which an AI agent may try to communicate. While logging and monitoring alone cannot prevent malicious activity, they can help organizations respond quickly to any issues that may arise. Teams should also use SAST and DAST tools as part of a strategy to prevent insecure code entering production applications. Effective logging doesn’t just catch problems; it creates an audit trail that helps organizations respond quickly, learn from incidents and continually strengthen defenses.

5. Trust no vendor without proof of security

In the race to bring AI products to market, security features may be unacceptably low on many vendors' priority lists; this can be true for established technology companies and startups alike. We see many incorporating MCP, a standardized interface that is designed to allow AI agents to interact with various tools and resources. However, MCP does not currently have adequate mechanisms for separating content from instructions for the LLM. As a result, bad actors can weaponize content to compel AI agents to perform unauthorized actions. Buyers should scrutinize the system access and the scope of actions that MCP servers and other AI tools can perform. Tools with little or no access to internal resources may pass with a lighter review; anything interacting with core systems and data should be screened much more thoroughly. Buyers should be skeptical and perform their own in-depth security and data privacy reviews of both the vendor and any sub-processors, including underlying LLM providers. Reviewing the vendor’s software development team can provide clues; those with prior history building secure applications are more likely to bring a security mindset to new projects. In short, we believe you should treat every AI vendor as unproven until verified and never outsource your security standards to theirs.

Growth Timeline

No items found.

Don't delete this element! Use it to style the player! :)

Cae Keys
Truemuzic
Thumbnail
Play
0:00
0:00
https://interests.summitpartners.com/assets/DHCP_EP9_FutureHealthCare_DarrenBlack-2.mp3

AI opens new avenues for inbound attacks where malicious payloads enter an organization, as well as on the outbound side with data exfiltration. We believe the rapid pace of change means that the types of formalized structures that secure the internet and SaaS applications do not yet exist for AI. In our view, the organizations that will thrive are those that approach AI with both ambition and caution, adapting time-tested security principles to new technologies and making cybersecurity a boardroom-level priority.

Related Experience

The content herein reflects the views and opinions of Summit Partners and is intended for executives and operators considering partnering with Summit Partners. The information herein has not been independently verified by Summit Partners or an independent party. In recent years, technological advances have fueled the rapid growth of artificial intelligence (“AI”), and accordingly, the use of AI is becoming increasingly prevalent in a number of sectors. Due to the rapid pace of AI innovation, the broadening scope of potential applications, and any current and forthcoming AI-related regulations, the depth and breadth of AI’s impact - including potential opportunities – remains unclear at this time.

Inferences herein to “expertise” or any party being an “expert” or other particular skillsets are based solely on the belief of Summit Partners and are provided only to indicate proficiency as compared to an average person. Such inferences should not be construed or relied upon as an indication of future outcomes.

Information herein is as of September 15, 2025.

Definitions

“DNS” refers to Domain Name System, unless otherwise noted.

“SIEM” refers to Security Information and Event Management, unless otherwise noted.

“SAST” refers to Static Application Security Testing, unless otherwise noted.

“DAST” refers to Dynamic Application Security Testing, unless otherwise noted.

“MCP” refers to Model Context Protocol, unless otherwise noted.

Reference resources:

  • Vibe talking: Dan Murphy on the promises, pitfalls and insecurities of vibe coding (Invicti blog)
  • MCP Horror Stories: The GitHub Prompt Injection Data Heist (Docker blog)
  • SQL injection (Wikipedia)
  • https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/ (Simon Willison’s blog)

Stories from the Climb

At Summit, it’s the stories that inspire us – the problems being solved and the different paths each team takes to grow a business. Stories from the Climb is a series dedicated to celebrating and sharing the challenges of building a growth company. For more Stories and other Summit perspectives, please visit our Growth Company Resource Center.

Get the Latest from Summit Partners

Subscribe to our newsletter to stay up to date on our partners, portfolio, and more.

Thank you for subscribing. View the latest issue of The Ascent, or follow Summit Partners on LinkedIn for the latest news and content.
We were not able to submit your form. Please try again.