The pace of AI development and implementation continues to accelerate.1 However, in the rush to adopt AI, organizations may underestimate the related cybersecurity risks. For asset allocators and asset managers – as well as for the companies in which they invest – AI tools can expose companies to new forms of attack, often without delivering the enterprise-grade cybersecurity protections that leaders have come to expect from traditional SaaS solutions. As AI becomes increasingly embedded in core operations, and regulatory guidance around governance and security remains nascent, we believe it’s essential for companies to evaluate their cybersecurity posture through a modern lens. In our view, this means adhering to established cybersecurity best practices internally, as well as ensuring that any third-party AI tools and vendors meet those same standards.
While we recognize that an organization’s exposure to AI-driven cybersecurity risks may vary in practice based on its adoption and integration of AI tools, in the sections that follow, we outline three emerging vulnerabilities unique to AI and five practical steps to help strengthen cyber defenses. Together, we believe these offer a playbook designed to help teams navigate AI innovation without sacrificing security.
Vulnerabilities
AI tools have the potential to expose new attack vectors and meaningfully expand the attack surface. Leaders who fail to recognize these threats risk exposing their companies to attacks that are designed to exploit new gaps in their security perimeter. In our view, awareness is the first step in designing defenses that can keep pace with AI’s rapid adoption.
1. Prompt injection turns inputs into attack surfaces
LLMs take instructions in the form of a prompt and generate a response based on those instructions. Additional context in the form of PDFs, images, website content and software pull requests can also be passed along with the prompt. While often helpful to steer the model’s response, this additional context can expose the model to prompt injection attacks. Prompt injection attacks typically present in the form of hidden or malicious instructions that can manipulate the model’s behavior and dramatically change the output. Prompt injection attacks use a variety of tactics, including text added in tiny clear fonts or hidden in images. The malicious text might instruct the model to disregard original directives and override other safeguards. For example, we’ve seen instances where instructions in pull requests for open-source code repositories have resulted in coding assistants posting files from private code repositories to public locations. In our view, content passed into AI systems - whether text, images or code - can potentially increase the system’s exposure to security vulnerabilities. As such, we recommend organizations continue to prioritize rigorous input validation and content controls as part of their broader risk management practices.
2. Code volume grows faster than review capacity
Many headlines trumpeting the volume of AI-generated code, in our view, miss an important point: all of this code still requires review and maintenance. Peer code reviews have, in our experience, long been a best practice in software development, and they continue to play an important role in the development process. Today, many human reviewers assume that AI assistants have incorporated cybersecurity best practices in code, but in practice this is not always the case.
Compounding this vulnerability, AI coding assistants tend to produce verbose and generic code that may not be tailored to the specific context and security practices of the environment in which it will be used. While AI-generated code may not be inherently less secure than code written by human developers, the sheer volume can present a challenge for the peer review process. Human reviewers can struggle to perform the thorough review needed, and offloading code review to other AI tools, while appealing on the surface, can ultimately amplify the problem. In our view, AI-generated code can enhance development efficiency, but it is not a substitute for established review practices. Without oversight, we believe the volume of code generated may pose operational and security challenges for some organizations.
3. AI agents widen access to sensitive systems
AI agents can be created to perform many valuable tasks for individuals and organizations, including assisting with emails and meeting preparation, writing code and executing developer workflows, answering customer service questions and more. To perform these tasks, AI agents often require access to sensitive systems such as email, code repositories or CRMs, making them targets for malicious actors seeking to gain access and exfiltrate data or otherwise control their behavior. As organizations experiment with agentic workflows, it is prudent to weigh productivity benefits alongside possible risks to sensitive systems. We believe incorporating these considerations into risk management practices help inform responsible adoption.
Mitigants
The vulnerabilities explored above demonstrate how traditional security assumptions often fall short in the context of AI. Below, we share five steps that, while not exhaustive, can form a solid foundation on which to build a stronger, safer AI strategy.
1. Apply least privilege by default
In our view, limiting system access and scope for action is an important step organizations can take to protect themselves from potential AI attacks. We believe read-only access should be the default, and AI-related authentication should be limited to the specific applications necessary for functionality. Shell access, which permits control over a machine’s operating system, may present heighted risk and should be avoided. In agentic workflows, developers might consider constraining the set of tools an agent can call and setting rate limits for actions to help minimize the potential impact of rogue behavior. While this principle of “least privilege” has long been a cybersecurity best practice, in the AI era, we believe its relevance becomes more pronounced.
2. Scan everything going in and out of LLMs
Whenever possible, we believe any content sent to an LLM should first be scanned for sensitive data such as personal information, credentials and trade secrets to prevent these data from leaving an organization. Likewise, organizations should consider scanning and validating LLM outputs before these outputs reach users or other applications.
Technologists have a long history of identifying and remedying vulnerabilities. Based on industry practices and published research, certain mitigation strategies—such as those targeting SQL injection vulnerabilities—are widely used to help reduce security risks. However, in our view, validation is typically more difficult in a world of LLMs, since text instructions don’t necessarily have the syntax signatures that coding languages do. In our experience, semantic filters and string-checking can provide some defense, and network security policies can further protect an organization. While no filter or strategy is perfect, in our view, layering multiple validation methods can help reduce the likelihood of malicious instructions slipping through undetected.
3. Keep humans in the loop for critical decisions
We strongly believe that humans should remain the gatekeepers for high-impact actions such as authenticating into sensitive systems, writing to or deleting from production environments and any actions with material financial or operational implications. If the volume of reviews is high, however, humans may become desensitized to risks and less thorough in their review, so careful system design and incentive structures are essential. In our view, YOLO (“you only look once”) settings, where a developer enables auto-approval and allows the AI agent to bypass manual confirmation before taking action, should be avoided entirely. In short, we believe humans must remain in the loop for critical decisions and that
4. Track every action and watch for anomalies
We believe organizations should log activity generated by AI tools and use a SIEM tool for correlation and anomaly detection. Enabling DNS monitoring may help detect suspicious domains with which an AI agent may try to communicate. While logging and monitoring alone cannot prevent malicious activity, they can help organizations respond quickly to any issues that may arise. Organizations could also consider using SAST and DAST tools as part of a strategy to help prevent insecure code from entering production applications. Effective logging creates an audit trail that can help organizations respond quickly, learn from incidents and strengthen defenses.
5. Trust no vendor without proof of security
Based on our experience, security features may be low on many vendors' priority lists. We see many vendors incorporating MCP, a standardized interface that is designed to allow AI agents to interact with various tools and resources. However, in our view, MCP does not currently have adequate mechanisms for separating content from instructions for the LLM. As a result, bad actors can weaponize content to compel AI agents to perform unauthorized actions. Buyers should scrutinize the system access and the scope of actions that MCP servers and other AI tools can perform. In our view, tools with little or no access to internal resources may pass with a lighter review; tools interacting with core systems and data should be screened much more thoroughly. Buyers should be skeptical and perform their own in-depth security and data privacy reviews of both the vendor and any sub-processors, including underlying LLM providers. Reviewing the vendor’s software development team can provide clues; those with prior history building secure applications may be more likely to bring a security mindset to new projects. In our view, organizations should approach AI vendors with careful diligence, recognizing that each provider’s security standards may vary. We recommend that companies review each vendor independently and avoid relying solely on a third party’s assurances when it comes to meeting internal security expectations.
Growth Timeline

January 1, 2024
Acquired by Vista Equity Partners
Rebrands as InvoiceCloud
Began trading on the New York Stock Exchange under the ticker symbol KVYO on September 20, 2023
January 1, 2023
Launched Klaviyo Customer Data Platform (CDP) and reviews - Surpassed 130,000 customers
January 1, 2022
Entered into a strategic partnership with Shopify, including capital investment - Launched partnership with Wix and completed first acquisition, Napkin.io - Opened Sydney office
January 1, 2021
Completes IPO on September 23 (NYSE: ESMT)

January 1, 2020
Rebranded to EngageSmart
Introduced support for Apple Pay, Google Pay

January 1, 2017
Entered the wellness vertical with the acquisition of SImplePractice.
January 1, 2021
Raised additional capital in a funding round led by Sands Capital - Launched SMS product - Announced native integration with Prestashop and partnership with WooCommerce
January 1, 2020
Raised approximately $200 million in new capital from Summit Partners and Accel
January 1, 2019
Raised approximately $150 million in capital from Summit Partners Opened London office

January 1, 2009
InvoiceCloud founded
Focused on local government and utility verticals
January 1, 2018
Surpassed 10,000 customers
January 1, 2017
Launched a partnership with BigCommerce
June 1, 2016
Surpassed 1,000 customers
January 1, 2016
Raised new capital in a funding round led by Astrial Capital
January 1, 2015
Received SAFE financing led by Accomplice
January 1, 2014
Surpassed 100 customers
January 1, 2012
Klaviyo founded
January 1, 2021
Completes IPO (NASDAQ: LFST) on June 10
January 6, 2020
Announces majority recapitalization
January 1, 2020
LifeStance completes 50th acquisition. With COVID onset, transitioned from 300 telepsych visits per week to more than 40,000
January 1, 2020
2.3M patient visits, 370 centers and 3,000+ clinicians
January 1, 2019
1.4M patient visits, 170 centers and 1,400 clinicians
January 1, 2018
930k patient visits, 125 centers and 800 clinicians
January 1, 2017
LifeStance founded with backing from Summit Partners and Silversmith Capital Partners
January 1, 2019
Launced charity streaming - live streaming fundraising
General Atlantic invests alongside Summit and management team

January 1, 2018
Entered the non-profit vertical with the acquisition of DonorDrive
Introduced and integrated telehealth solution

January 1, 2015
Summit Partners invests
Entered the healthcare vertical with the acquisition of HealthPay24
Don't delete this element! Use it to style the player! :)
.png)
AI tools can increase a company’s exposure to inbound attacks, such as the delivery of malicious payloads―¬as well as to outbound risks like data exfiltration. We believe the rapid pace of change means that the types of formalized structures that secure the internet and SaaS applications do not yet exist for AI. In our view, the organizations that will thrive are those that approach AI with both ambition and caution, adapting time-tested security principles to new technologies and making cybersecurity a boardroom-level priority.
Related Experience
* There can be no assurance that the performance of any such professional serves as an indicator of future performance. There is no guarantee that Summit's investment professionals will successfully implement the Summit funds’ investment strategy. A complete list of Summit employees is available upon request.
(1) Source: Microsoft, AI-powered success –with more than 1,000 stories of customer transformation and innovation. July 24, 2025. https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/07/24/ai-powered-success-with-1000-stories-of-customer-transformation-and-innovation/
Definitions:
- “DNS” refers to Domain Name System, unless otherwise noted.
- “SIEM” refers to Security Information and Event Management, unless otherwise noted.
- “SAST” refers to Static Application Security Testing, unless otherwise noted.
- “DAST” refers to Dynamic Application Security Testing, unless otherwise noted.
- “MCP” refers to Model Context Protocol, unless otherwise noted.
Reference resources:
- Vibe talking: Dan Murphy on the promises, pitfalls and insecurities of vibe coding (Invicti blog)
- MCP Horror Stories: The GitHub Prompt Injection Data Heist (Docker blog)
- SQL injection (Wikipedia)
- https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/ (Simon Willison’s blog)
About Summit Partners
Summit Partners is a leading growth-focused investment firm, investing across growth sectors of the economy. Today, Summit manages more than $45 billion in capital and targets growth equity investments of $10 million – $500 million per company. Since the firm’s founding in 1984, Summit has invested in more than 550 companies in the technology, healthcare and life sciences, and growth products and services sectors.





