Artificial Intelligence

Welcome to the GPT Store – and Its Three Million Security Uncertainties


OpenAI launched its GPT Store (Store) in early January 2024 to “help you find useful and popular custom versions of ChatGPT.” For companies, it promises new productivity with more AI-powered applications that employees can use with the organization’s own datasets, while enjoying the familiarity and built-in capabilities of ChatGPT.

For one set of employees – companies’ overworked technology compliance teams and cybersecurity engineers – the arrival of a third-party app store full of generative pre-trained transformers (GPTs or apps) poses an unnerving challenge. In three months, OpenAI claims, the Store has accumulated three million custom GPTs based off its large language model (LLM).

The new GPTs can link a series of actions for users to “allow GPTs to integrate external data or interact with the real-world [sic],” Open AI highlighted. That means that the user’s data often will travel to a developer’s third-party server. In a sample of the most downloaded productivity GPTs, roughly a quarter provide the option to upload files, Harmonic Security CEO Alistair Paterson told the Cybersecurity Law Report.

The Store provides “a thin veneer of legitimacy around some of these applications, but fundamentally, the store’s not really checking where the data is going and how it’s being secured,” Paterson said. Now that many companies use ChatGPT’s Enterprise version, their employees could get a false sense of security about the legitimacy of the new apps that work with it. “It is in no way like Apple’s App Store ecosystem, where there is a very heavyweight check that apps have to go through before they get listed,” he cautioned.

This article details the top security risks of the Store and identifies key priorities for compliance professionals and company engineers as they confront the risks of the new GPT app ecosystem. It also suggests resources for cyber compliance professionals to monitor security concerns around LLMs more broadly.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

The Short Path to a Store With Millions of GPTs

In 2023, a few months after ChatGPT emerged publicly, OpenAI allowed outside developers to offer plugins as add-on tools to it. These extensions allowed users to go beyond the main trained LLM to fetch real-time data or information on the web, then perform a chain of actions. By the end of 2023, almost 1,000 plugins were available for the platform.

Developers told OpenAI it was easier to build standalone GPTs trained on its LLM than to create plugins. At the end of 2023, OpenAI shifted direction. It encouraged outsiders to build self-contained GPTs preloaded with its DALL-E, Code Interpreter and ChatGPT features, setting the Store in motion. OpenAI will shut down all “conversations” using plugins on April 9, 2024.  

The arrival of standalone GPTs will let developers in different industries leverage the LLM for structured tasks specific to its business operations. The Store offers “custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.” These will allow actions “such as connecting GPTs to databases, plugging them into your emails, or making them your shopping assistant,” the Store highlights.

See “Apple Overhauls Privacy for iPhone Apps, but Will It Enforce Its Policies?” (Sep. 23, 2020).

The GPT Store’s Structure and Its Appeal to Business

The Store’s approach is similar to Apple and Google’s lucrative mobile app stores. It displays leaderboards and curated picks. It offers developers a payout based on how many people use their GPT. “We believe the most incredible GPTs will come from builders in the community,” OpenAI says. 

Developers, including those at companies, can set a GPT’s visibility to “only me,” “anyone with a link,” or “everyone.” Some GPTs will work only with OpenAI’s three paid tiers, called Teams; Plus; and Enterprise.

One difference from the marketplaces for mobile apps or smart speaker apps is that, with the GPT market, businesses are expecting to gain powerful new productivity tools with AI.

The Store’s Low Barrier to Entry

OpenAI’s GPT Builder lets most anyone program a GPT. A GPT maker uses regular language, rather than computer code, to tell the tool the desired capabilities and actions for the new GPT, and then it spits out the GPT to enact those functions.

OpenAI emphasizes the Store’s benefits for ChatGPT Enterprise customers, saying its GPT builder tool will “empower users inside your company to design internal-only GPTs without code and securely publish them to your workspace. The admin console lets you choose how GPTs are shared and whether external GPTs may be used inside your business.”

All creators must verify their profiles, but it is possible that, along with trained engineers, Instagram influencers, YouTube stars and scam artists will make a GPT.

“We use a combination of automated systems, human review and user reports to find and assess GPTs that potentially violate our policies,” OpenAI says. A violation “can lead to actions against the content or your account, such as warnings, sharing restrictions or ineligibility for inclusion in GPT Store or monetization.”

Since January 2024, the Store’s process for vetting creators remains unclear, Paterson reported.[1] “I am incredibly bullish on the overall AI revolution for businesses. There are a ton of exciting applications, but I am not sure that the GPT Store is going to be a significant part of that,” he opined. So far, “it looks like mostly a series of thin wrappers on top of a groundbreaking technology, which is ChatGPT. But these wrappers are not adding a ton of value on top. The value likely is going to come from the more thought-through, independent applications that are now coming out for enterprise every day,” he observed.

Practically, though, employees will find many of the new external GPTs compelling. “You can automate the boring aspects of your job and potentially make yourself look better and faster to your boss,” Paterson pointed out.

Thus, businesses soon will have to set fresh guidelines about the Store and its many GPTs, decide whether to permit use of any Store GPTs, and educate employees. To do that, businesses will need to understand the extent of the Store’s risks.

See “Go Phish: Employee Training Key to Fighting Social Engineering Attacks” (Aug. 9, 2023)

OpenAI’s Developer Rules

OpenAI has taken some steps to require GPT developers to protect privacy, security and safety, but observers see notable gaps and flaws in its rules three months in.

OpenAI Flags Privacy

OpenAI lists privacy first in its policies for builders and prevents them from accessing users’ chat with their GPT. But some user data still will travel. If a GPT is programmed to use third parties’ application programming interfaces (API, a site’s code gateway for secure communication), the developer selects whether the user’s chat can be sent to each API.

OpenAI’s rules direct GPT makers to not compromise the privacy of others, including “collecting, processing, disclosing, inferring or generating personal data without complying with applicable legal requirements.” Nor are GPTs supposed to solicit “sensitive identifiers, security information, or their equivalents: payment card information (e.g., credit card numbers or bank account information), government identifiers (e.g., SSNs), API keys, or passwords.”

OpenAI also forbids makers from using facial recognition or other biometric systems for assessment or identification. Unauthorized monitoring of individuals is likewise banned.

See “Checklist for Framing and Assessing Third-Party Risk” (Aug. 16, 2023).

Less Guidance and Controls on GPTs’ Security

OpenAI’s rules’ attention to cybersecurity is milder. For each action a GPT takes involving a party, the developer must supply a privacy policy URL and decide whether to require authentication with API Key or OAuth, but the developer also can select “none.”

One piece of good news is that OpenAI’s centralized process and easy-to-use app creation should reduce some of the “shadow AI” that festered throughout 2023, Paterson said. Outside of OpenAI, over 10,000 third-party apps popped up to offer appealing generative AI uses, sometimes building on ChatGPT’s capabilities. Many had poor functionality and dubious security and privacy measures, he noted.

The Store does not yet offer much documentation on cybersecurity, observers have lamented. OpenAI seems to have a team working on it, Paterson said, so that may change.

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023).

Prohibitions on Cheating, “Jailbreaking” and IP Circumvention

OpenAI’s rules for GPT makers highlight its many other headline controversies more than security risks. Its prohibitions reflect the copyright lawsuits that OpenAI is fighting, deepfakes, political disinformation and widespread attention to ChatGPT’s inappropriate output, like abusive content, sexual material, and discriminatory automated decisions.

Plenty of GPTs in the Store raise copyright concerns, Paterson noted. Some let users speak in the voice of trademarked characters or create art in the style of copyrighted material.

Other GPTs on offer purport to let users prompt ChatGPT to generate nasty or violative output that the LLM is programmed to avoid. Some Store GPTs also circumvent integrity tools such as plagiarism detectors and AI content detectors like Originality.ai.

See our two-part series on managing legal issues arising from use of ChatGPT and Generative AI: “E.U. and U.S. Privacy Law Considerations” (Mar. 15, 2023), and “Industry Considerations and Practical Compliance Measures” (Mar. 22, 2023).

Security Risks for the GPT Store

Lack of Security By Design

“The way that OpenAI designed its platform had very obvious issues,” Washington University security researcher Umar Iqbal told the Cybersecurity Law Report.

The ecosystem’s architecture is insecure. “Running a lot of apps in the same execution space is not a thing anymore in modern computer security – when a platform opens an app, it sandboxes it using some virtual restrictions. OpenAI did not do that,” Iqbal noted.  

OpenAI has left it possible, when users run multiple GPTs, that the GPTs can steal and overwrite each others’ stored files, Iqbal noted. He cited another researcher’s report of this vulnerability (in February 2024) with OpenAI’s Code Interpreter, which employees will want to use to analyze data and files written in code. The thefts could include the user’s conversations with ChatGPT, the report warned.

The “multi-tenant” environment and sharing of computing power and infrastructure that OpenAI has established is a general risk, Paterson added, “but that is probably the only way the GPT Store would ever work economically.”

See “Innovation and Accountability: Asking Better Questions in Implementing Generative AI” (Aug. 2, 2023).

Third-Party Interactions and Chances for Stolen Credentials

The arrival of GPTs that perform a chain of actions will require increasing interactions across websites and dependency on third parties, often running multiple apps simultaneously, Iqbal said.

This poses “a huge third party risk management challenge that companies have now on their hands,” Paterson observed.

Whitelisting GPTs would require labor-intensive vetting. For example, one popular GPT in the Store currently analyzes PDFs, requiring users to upload their files, which goes to a third-party site, Paterson noted. That site does not say it is GDPR, ISO 27001 or Soc2 compliant, nor does it indicate that it has “any real security program around the data,” Paterson reported.

A GPT performing actions across sites and apps will need authentications to enter multiple user accounts. That makes it plenty appealing for malicious GPT operators, noted a report from Salt Security. The report provided examples of the relative ease of stealing security tokens and passwords in the process.

See “Checklist for Building an Identity-Centric Cybersecurity Framework” (Nov. 3, 2021).

Unintentional Conflicts and Data Leaks

GPT apps “can cause inadvertent security, privacy, and safety issues” because their instructions sometimes redirect the LLM, Iqbal noted. On March 7, 2024, during the FTC’s PrivacyCon, Iqbal presented research evidence that vulnerabilities with ChatGPT plugins persisted throughout 2023.

One inadvertent problem Iqbal found involved personal data. For example, both a medical appointment GPT and a travel reservation GPT will gather personal information to complete its task, but “the LLM can get confused about what is the personal data needed by each of the apps,” and inadvertently upload sensitive health data into the travel app, he reported.

With the lack of isolation in the ChatGPT environment, plugins in 2023 also hijacked other apps’ sessions. An initial instruction “sometimes persists beyond the context of using that app,” Iqbal said. In one example, an app switched the language for the LLM’s response, and then the LLM continued to use that language in other apps, deviating from the user’s usual language.

The Risks of Natural Language

Generative AI’s appeal lies in the flexibility of natural language inputs, but those can be vague compared to the engineered code in web and mobile ecosystems. “The natural language interface of a GPT is an additional attack vector, because of the ambiguity and imprecision,” Iqbal cautioned.

Natural language made it easier to confuse the LLM, noted University of Washington assistant professor Franziska Roesner, who collaborated on the research with Iqbal. One app could affect other apps with certain instructions. In one travel app, “the natural language description said something in capital letters like ‘always use this for travel-related queries.’” With two similar apps installed, “ChatGPT picked the one that yelled at it, basically,” she said.

OpenAI has responded to alerts of vulnerabilities, but Iqbal has spoken to developers who reported vulnerabilities with their plugins that were not remedied. “It has been only three months since GPTs have launched and there are millions of GPTs. Similar to how OpenAI is moving fast on the innovation, they also need to move fast for securing their systems. Which is not as fast as that innovation at the moment,” Iqbal contended.

See “A 2023 Cyber Regulation Look-Back and 2024 Risk-Management Strategies” (Dec. 13, 2023).

Vetting and Procurement Steps

Revisit Gen AI Policies

Companies spent 2023 grappling with use policies for ChatGPT and other LLMs. The arrival of many narrow third-party GPTs is a shift from the previous focus on a few dominant LLMs, Roesner noted. “Now is definitely the time to think about how to vet these apps, how to design the APIs, how to think about permissions for user data,” she said.

“The only way enterprises can realistically manage [the array of threats] is to get the enterprise edition of ChatGPT,” and use its security settings, Paterson opined.

To address any employee interest in the GPT Store, or GPT apps independently available, run a fresh exercise with departments to identify their latest use cases for AI, Paterson suggested. “Then you can come up with some standardized, structured ways of meeting the business’s needs” and prevent individuals from using an assortment of unvetted GPTs, he said.

Companies might remind employees to be cautious about using sensitive material (business or personal) with any newer, “smaller” GPTs derived from ChatGPT, Paterson recommended. Meanwhile, organizations should take technical steps to verify “that sensitive data isn’t getting out of the business” and maintain ways to see “which applications are being adopted.”

See our two-part series on the practicalities of AI governance: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program,” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 8, 2023).

Model the Threats to Address Other AI Risks

The GPT Store is not the only vector multiplying LLM risks for enterprises. New AI productivity software on the market promises seamless AI integrations. One product claims that it “allows you to access AI at any time, within any software.” It also features add-on plugins – possibly another layer of interaction between GPTs.

Companies’ cyber teams should undertake a fresh round of threat modeling that incorporates such apps and their possible interactions, recommended Iqbal.

A period of accelerating adoption may be coming and will shift the risks. Companies are still slowly testing Gen AI services and integrations, but “over the next 12, 18 months, they have to be adopting these tools or risk being left behind,” Paterson predicted. Many companies are now vetting Microsoft’s Copilot, he added, which “is a very different thing from the GPT Store, but also has enterprise implications because it is possible for employees to build their own Copilots with sensitive data and then make those Copilots available externally.”

To cope with the evolving risks, compliance teams will want to keep perspective and monitor broader LLM problems. One guide is the OWASP Top 10 for LLM Security. Iqbal also recommended the LLM Security site that collects reports – and some remedies. OWASP’s first-listed security threat is prompt injections, or malicious prompts. “This falls under those categories of attack for which we need research to come with effective solutions,” Iqbal said.

 

[1] OpenAI did not respond to multiple messages that the Cybersecurity Law Report sent to its communication and legal representatives to discuss the Store’s vetting and security measures.

 

Cyber Crime

Checklist Covering CSRB Recommendations on Five Areas for Strengthening Cyber Defenses


A report released by the Cyber Safety Review Board (CSRB) in 2023 (Report) framed five critical spheres for cybersecurity improvement based on weaknesses leveraged by Lapsus$ during attacks carried out in 2021 and 2022. Organizations can use this checklist derived from the Report, and incorporating related commentary from Manatt partner Paul H. Luehr, to strengthen measures in areas the Report highlighted, including identity and access management (IAM), building resilience, mitigating third-party risk, mitigating telecommunications vulnerabilities and addressing law enforcement challenges.

For in-depth coverage on the Report, see our two-part series “CSRB Report on Lapsus$ Attacks”: Key Takeaways and Law Enforcement Cooperation (Sep. 20, 2023), and Moving Beyond MFA, Building Resilience and Mitigating Third-Party Threats (Sep. 27, 2023).

Strengthening Identity and Access Management

  • Move beyond passwords to adopt secure-by-default IAM solutions.[1]
    • Look for technology providers that develop easy-to-use, secure-by-default IAM solutions that do not rely on text-based strings for authentication.
    • Move to Fast Identity Online (FIDO)-supported applications for phishing-resistant authentication (such as FIDO2).[2]
    • Transition away from short messaging service (SMS) and voice multi-factor authentication (MFA).[3]
  • Address theft of identification cookies by checking that software developers, designers and manufacturers have implemented secure-by-default measures that prevent it.
  • Combat social engineering by using phishing-resistant MFA for sensitive transactions, such as accessing customer records, elevating access privileges or conducting a subscriber identity module (SIM) swap.[4]

Building Resilience

  • Identify types and flow of sensitive data by mapping.[5]
  • Consider regular reviews of user activity to identify and address sensitive data, such as location, device, IP address, timing and/or expected behavior.
  • Adopt emerging modern architectures to ensure strong defense against attacks.
  • Ensure cybersecurity programs incorporate best practices[6] by taking the following steps:
    • identify and understand critical IT infrastructure;
    • implement least privilege access practices;
    • undertake robust monitoring with centralized log management;
    • use appropriately tailored zero trust architecture (ZTA) [7] following CISA’s Zero Trust Maturity Model or another appropriate framework;
    • adopt strong authentication; and
    • make it easy for employees to report suspicious activity.
  • Create a carefully tailored incident response plan.
    • Establish and assign clear roles and responsibilities for:
      • notification of law enforcement;
      • response oversight;
      • attack mitigation;
      • legal and regulatory compliance; and
      • public communications.
    • Obtain and validate contact information for relevant law enforcement and government agencies, and other third parties.
    • Create variations of the response plan to address various types of attacks such as ransomware.
    • Develop procedures to handle “swatting” (calling in false public safety concerns to bring emergency responders to a person’s home) and “doxing” (malicious publication of personal information) attacks on employees.
    • Identify and prioritize mission-critical IT infrastructure.
    • Create and test robust backup and restoration processes.
    • Establish information-sharing relationships with industry partners and law enforcement.
    • Ensure that the organization complies with any regulatory reporting requirements.
    • Develop robust internal communications protocols, including out-of-band mechanisms to prevent attackers from monitoring response activities.
    • Implement robust employee training.
    • Routinely test and update the plan.
  • Train employees to recognize social engineering techniques and educate them on the latest threats and how they can report suspected intrusions.
  • Engage staff, executives and the board in regular table-top exercises so they can practice their response to changing threats.
  • Conduct post-incident reviews, including:
    • determine whether any elements of the response plan should be updated or amended;
    • address any legal or regulatory issues that arose during the incident with a view to preventing similar problems from arising in future incidents; and
    • determine whether to provide additional information about the attack to government agencies or trusted third parties.

Mitigating Third-Party Risk

  • Encourage third parties, including business process outsourcing companies (BPOs), to agree upon cybersecurity-related contract terms.
    • Require in the contract with the service provider, particularly a BPO, that its cybersecurity practices meet or exceed those of the organization, including clear provisions for risk management and monitoring.
    • Ensure the contract with the service provider or BPO addresses that party’s:
      • use of strong authentication;
      • employee training;
      • data handling, processing and storage;
      • secure software development lifecycle management;
      • device management; and
      • co-ownership of incident response, including assignment of roles and responsibilities.
    • Include separate privacy provisions that prohibit use of personal data outside the contract, as required by many new state laws.
    • Consider requiring, for highly sensitive transactions, that the service provider or BPO use the organization’s hardware and/or cybersecurity processes.
    • Separate cybersecurity requirements based on the level of access the vendor has to the organization’s network or sensitive data.
    • Require the following in the contract when a vendor has deep and broad access to an organization’s data:
      • written vendor policies based on a national standard like NIST’s cybersecurity framework;
      • annual risk assessments and/or industry certifications;
      • implementation of MFA and complex passwords;
      • regular vulnerability scans and patching;
      • audit rights;
      • prompt reporting and investigation of data incidents;
      • indemnification and payment of all costs associated with a breach by the vendor; and
      • maintenance of cybersecurity insurance to cover such costs.
  • Ensure that third parties have an incident response plan designed to address similar attacks, referencing NIST’s Computer Security Incident Handling Guide (SP 800‑61).[8]
  • Ensure service providers have developed their own relationships with law enforcement and consider how they will engage with information sharing partners during an incident.

Mitigating Telecommunications Vulnerabilities by Preventing Fraudulent SIM Swaps

  • Build resistance to use of social engineering to achieve fraudulent SIM swap by doing the following:[9]
    • permit customers to lock accounts and require a strong multi-layered validation process for unlocking accounts;
    • require strong identity verification by default for all SIM swaps;
    • place tight controls on who can perform SIM swaps;
    • require a waiting period prior to effecting a swap;
    • use additional identification measures, such as a photograph, when the requester does not present strong identification credentials;
    • use video tools for swaps requested online or by phone;
    • provide account holders with a detailed report following a swap;
    • provide routine training to retail employees who perform swaps;
    • conduct robust security checks on employees;
    • limit the collection and sharing of personally identifiable information with employees to that needed for specific transactions;
    • require employees and third parties involved in SIM swaps to complete a strong MFA authentication when submitting a swap request; and
    • track fraudulent SIM swaps, impose business costs on third parties that do not seek to mitigate fraudulent swaps and treat SIM swaps as a crime.
  • Detect and address theft or abuse of point-of-sale devices used for SIM swaps, including:
    • establish the ability to wipe devices remotely;
    • revoke trusted access; and
    • use ZTA and vulnerability scanning in retail stores to prevent new or untrusted devices from joining a network.
  • Assess and harden all applications and programming interfaces used for managing customer accounts.
  • Conduct routine penetration testing and third-party audits.

Addressing Law Enforcement Challenges

  • Improve reporting of cyberattacks to federal responders.
    • Make reports promptly to maximize the ability of government agencies to respond.
    • Have contact information for relevant state and federal agencies, as well as their missions and resources.
    • Understand available protections for organizations that share information about incidents and address concerns about loss of privilege from sharing information.
    • Seek protection for providers of online services that identify evidence of cybercrime in online communications.
    • Clear up misconceptions about whether and when law enforcement agencies report cybersecurity incidents to regulators.
  • Build resilience against use of fraudulent emergency disclosure requests (EDR)[10] by devoting appropriate resources to the authentication of EDR requests.
    • Determine whether to adopt any new mechanisms, such as use of standardized digital signatures.
    • Consider how attackers have used fraudulent EDRs.
    • Assign roles and responsibilities for verifying the legitimacy of EDRs.

 

 

[1] Attackers easily can obtain passwords for access to targets’ systems using simple hacking techniques.

[2] FIDO2-compliant, hardware-backed solutions should be built into consumer devices by default. Developers should leverage standards like WebAuthn and technologies like Passkeys.

[3] If an organization already uses MFA, it should be able to move quickly from weaker voice or SMS push messages to stronger app-based MFA with number matching. It may be 5 to 10 years before FIDO2 devices and solutions completely replace passwords.

[4] A SIM swap is when cybercriminals trick a cellular service provider into switching a victim’s service to the criminals’ SIM card, thereby essentially hijacking the victim’s phone number, usually to exploit two-factor authentication to gain fraudulent access to bank accounts.

[5] The Report mentions the need to identify an organization’s critical infrastructure, but “often it is even more important to identify specific types of sensitive data (e.g., SSNs, health data, strategic plans) that flow through an organization or its vendors,” Luehr opined. “Only by mapping the types, locations, and flow of sensitive information can an organization protect it.”

[6] Each company must consider the cybersecurity of its entire information ecosystem, especially in this age of mobile and cloud-based computing.

[7] ZTA allows an organization to analyze and re-verify users as they move throughout a network, especially when seeking access to sensitive data. To reduce daily friction for computer uses, many organizations have moved to a single-sign-on procedure.

[8] In the event of an incident, they should implement their response plans, notify law enforcement and monitor response communications for unauthorized participants – including the attackers.

[9] The CSRB recognizes that, although measures to prevent fraudulent SIM swaps may add friction to the customer experience, such measures are needed to protect customers’ sensitive information and prevent attackers from using telecommunications companies to gain access to other targets.

[10] Federal law (18 U.S.C. § 2702) permits providers of electronic communications services to disclose information to government entities pursuant to an EDR in the event of “an emergency involving danger of death or serious physical injury to any person.”

SEC Enforcement

SEC’s 2024 Regulatory Focus


“The SEC remains aggressive on rules, on exams, on enforcement,” but the pace of rulemaking has slowed somewhat due to legal challenges, said ACA Group (ACA) global advisory leader Carlo di Florio at the firm’s regulatory outlook program. Di Florio and his ACA colleagues covered top-of-mind regulatory issues for investment advisers and broker-dealers, including the new private fund rules; AI; custody; off-channel communications; anti-money laundering (AML) compliance; Regulation Best Interest (Reg BI) and other compliance concerns for broker-dealers; new regulatory requirements affecting registrants; cybersecurity; marketing; environmental, social and governance (ESG) investment factors; U.K. developments; and compliance technology. This article synthesizes the key takeaways from the program.

See “SEC Director Offers Clarification on New Cyber Disclosure Regime” (Jan. 3, 2024).

Private Fund Rules

In August 2023, the SEC adopted final rules for private fund advisers (Private Fund Rules). Although the rules are being challenged, the SEC is already conducting exams focusing on the substantive provisions of the rules, noted di Florio.

Upcoming Compliance Dates

In September 2024, the Private Fund Rules pertaining to restricted activities, preferential treatment and adviser-led secondary transactions take effect for funds with at least $1.5 billion in private fund assets under management (AUM), said ACA partner Joshua Broaded. Those rules take effect for smaller advisers in March 2025. Additionally, the quarterly statement and private fund audit obligations will take effect for all advisers in March 2025.

Preparing for Compliance

Although there are pending legal challenges to the Private Fund Rules, advisers should not wait to adopt the policies, procedures and controls needed for compliance, Broaded recommended. Advisers should do four things:

  1. Prepare mock-ups of quarterly statements, which may require input from outside service providers. Determine what information is needed for those statements, where it will come from, who will provide it and how it will be compiled in the requisite format.
  2. Review practices for non-pro rata cost allocations, especially private equity dead-deal costs, which will be affected by the Restricted Activities Rule.
  3. Review side letters. The Preferential Treatment Rule may restrict certain practices or require additional disclosures.
  4. Address any firm-specific implications of the rules.

See “2024 SEC Examination Priorities: New Approaches to Old Areas of Concern” (Jan. 17, 2024).

AI

AI is now on the regulatory radar, said di Florio. The SEC is conducting an AI-focused exam sweep, and Chair Gary Gensler has announced an enforcement focus on AI-related fraud. According to Broaded, over the past year, there have been four significant regulatory developments concerning AI:

  1. In July 2023, the SEC proposed the Predictive Data Analytics Rule (PDA Rule), which focuses on conflicts of interest associated with client/investor interactions and investment decision making. The proposal takes an extremely expansive view of the types of technologies and processes that constitute predictive data analytics. The SEC is considering industry comments and monitoring other regulatory activity in the area.
  2. At the same time, the Securities Industry and Financial Markets Association (SIFMA) issued a white paper taking a more conventional approach to AI governance, including:
    • scoping, inventorying and risk-rating AI exposures;
    • using governance teams;
    • adopting policies and procedures;
    • training; and
    • testing.
  3. Although not focused on financial services, President Biden’s October 2023 Executive Order on the safe and trustworthy development of AI addresses privacy concerns and potential model bias, both of which are relevant to advisers.
  4. An ongoing SEC exam sweep is assessing firms’ use of AI and the governance mechanisms discussed in SIFMA’s white paper.

The PDA Rule would also require strong cybersecurity governance and protections, said ACA managing director Christine Tetherly-Lewis. Additionally, firms would have to identify and eliminate conflicts of interest associated with covered technology. Finally, they would be required to maintain documentation showing how an AI tool was evaluated and tested. Although the rule focuses on elimination of conflicts, firms should also consider other potential limitations of new technology; understand why and how staff plan to use it; and evaluate the risks of deploying it.

In light of those developments, Broaded advised firms considering using AI to implement an appropriate governance framework by:

  • establishing a governance committee;
  • taking an inventory of AI risks;
  • developing acceptable use policies;
  • reviewing their AI disclosures and marketing content;
  • communicating and training staff on AI governance and policies;
  • conducting appropriate due diligence on AI vendors; and
  • periodically re-evaluating this framework.

See our two-part series on the practicalities of AI governance: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 8, 2023).

Custody

In February 2023, the SEC proposed a new Rule 223‑1 under the Investment Advisers Act of 1940 (Advisers Act), which would be known as the “Safeguarding Rule,” to replace the current Custody Rule (Rule 206(4)‑2). According to Broaded, the Safeguarding Rule would:

  • significantly expand the types of assets subject to the rule;
  • extend the definition of “custody” to include discretionary trading;
  • require advisers to enter into agreements with custodians directly and notify clients and private fund investors of new custodial arrangements;
  • impose on advisers an advance and ongoing due diligence requirement with respect to the accounting firms that conduct surprise exams; and
  • establish new requirements for foreign custodians.

If the Safeguarding Rule is adopted, there will be a 12‑ to 18‑month compliance period. The SEC has received multiple comments, especially regarding digital assets. Advisers that hold assets that might become subject to the new rule or have multiple custodial relationships should monitor developments carefully, Broaded suggested.

See “Fund Managers Must Ensure Adequate Security Measures Under Safeguards Rule or Risk SEC Enforcement Action” (Oct. 6, 2021); and “Understanding Cyberattacks on Digital Asset Platforms” (May 17, 2023).

Off-Channel Communications

Off-channel communications remains a big risk, according to di Florio. The February 2024 round of SEC settlements resulted in $81 million in aggregate penalties. Total fines across the three waves of SEC enforcement actions in this area now exceed $3 billion.

The SEC uses electronic communications reviews both for examinations and enforcement, noted Broaded. It continues to find pervasive use of off-channel communications. Although the SEC’s primary focus has been on broker-dealers, the same concerns apply to advisers. Scrutiny will continue until firms show “effective compliance, supported by adequate resources, technology and analytics,” warned di Florio. The recent enforcement actions suggest that firms must do more than simple keyword searches or random sampling, added ACA managing director Leigh Emery. Firms must also be able to document their surveillance, training and employee attestations.

Firms should assume their employees are communicating on unapproved channels and proceed accordingly, Broaded advised. CCOs must ensure “comprehensive capture.” To accomplish this, advisers should leverage technology, training, testing and quarterly certifications. The ability to analyze the collected data has improved significantly in recent years. Advisers should use what they capture to improve compliance.

Additionally, FINRA will review how broker-dealers are monitoring communications, as well as their policies and procedures for text messaging and mobile devices, noted ACA managing director Francois Cooke. It will also focus on detecting unapproved communication channels when conducting email reviews.

See “Recent Developments in SEC, DOJ and Civil Litigation Efforts Targeting Off-Channel Electronic Communications” (Aug. 16, 2023); and “SEC and CFTC Continue to Penalize Firms for Electronic Communications Recordkeeping Violations” (Sep. 20, 2023).

AML

“Expect increased focus on sanctions, AML and financial crime,” di Florio cautioned. The Treasury Department recently proposed new rules that would subject investment advisers to the same AML requirements applicable to banks and broker-dealers. Cooke explained that the SEC has focused on whether programs are properly tailored and suspicious activity report filings, while FINRA has focused on new account frauds and inadequate responses to red flags.

See “Navigating the Intersection of Digital Assets and AML” (Jun. 29, 2022).

Broker-Dealer Compliance Issues

Reg BI

Regulatory reviews of compliance with Reg BI have become more substantive over time, Cooke said. In the first year after Reg BI’s adoption, the SEC and FINRA focused on training, policies and procedures. The next year, their reviews expanded to include disclosures on Form CRS. They are now likely to focus on how firms are demonstrating that they are acting in the best interest of clients, especially with respect to alternatives to recommended investments. Consequently, dual registrants and broker-dealers with affiliated advisers should document why they recommended a brokerage or an advisory account to a particular client.

Trading

In both equity and fixed income trading, regulators will assess the accuracy of consolidated audit trail reporting, noted Cooke. Other areas of focus include:

  • pricing of fixed income trades;
  • obtaining “locates” on short sales;
  • best execution; and
  • market access.

New Regulatory Requirements

Cooke noted that there are four additional developments that will affect registrants:

  1. Corporate Transparency Act: The new Beneficial Ownership Reporting Rule requires entities created or registered in the U.S. to disclose beneficial owners and control persons. Entities formed prior to January 1, 2024, have until the end of the year to report.
  2. T+1 Settlement: The new T+1 regime takes effect May 28, 2024. Broker-dealers will be required to have written agreements or policies and procedures to ensure that allocations, affirmations and confirmations are completed no later than the end of the trade date. Additionally, advisers will be required to make and keep records of all allocations, affirmations and confirmations.
  3. Covered Agency Transactions: FINRA Rule 4210 covers “to be announced” transactions, collateralized mortgage obligations and “specified pool transactions.” In August 2023, FINRA amended this rule to require broker-dealers to collect daily mark-to-market margin and enforce written risk limits. Advisers report that some broker-dealers are already requesting the master securities forward transaction agreements that will implement the margin requirement.
  4. Securities Lending: In October 2023, the SEC adopted Rule 10c‑1a, which requires intermediaries to report securities lending information to FINRA. FINRA has not yet established the reporting mechanism.

See “Financial Services 2024 Privacy, Cybersecurity and AI Regulation Overview” (Feb. 14, 2024).

Cybersecurity

Regulators are focusing on the prevention of and response to cyberattacks, noted Cooke. The SEC continues to find cybersecurity-related deficiencies on exams, di Florio said. According to Tetherly-Lewis, if adopted, proposed Rule 206(4)‑9 would require advisers to:

  • adopt and implement appropriate policies and procedures;
  • disclose relevant risks and risk management practices;
  • rapidly report cybersecurity incidents;
  • maintain records of policies, procedures, risk assessments and testing; and
  • ensure greater board oversight of, and a firmwide approach to, cybersecurity risk.

See our two-part series on cybersecurity practices for private equity sponsors and their portfolio companies: “Incident Prevention and Response” (Feb. 28, 2024), and “Due Diligence and Post-Acquisition Efforts” (Mar. 6, 2024).

Marketing Rule

Rule 206(4)‑1 under the Advisers Act (Marketing Rule), which took effect in November 2022, remains a significant SEC focus area, ACA director Rosellen Bounds noted. A recurring theme in recent enforcement actions is that advisers must have appropriate policies and procedures to govern their marketing practices – especially presentation of hypothetical performance. Examiners have found many deficiencies on substantiation of material facts, as well as performance calculations and reporting, di Florio observed.

A Marketing Rule exam sweep is in progress. Examiners will focus on testimonials, endorsements and third-party ratings, according to di Florio. The Division of Examinations is likely to issue a risk alert with its observations later this year, Bounds added. On February 6, 2024, the SEC issued an updated Marketing Rule FAQ on time periods and methodologies for calculating gross and net returns.

Finally, Bounds reminded FINRA members that FINRA Regulatory Notice 20‑21 requires presentation of internal rates of return on unrealized or partially realized holdings to be consistent with the Global Investment Performance Standards (GIPS). Some firms have moved from being GIPS-consistent to being GIPS-compliant, she added.

ESG

The regulatory landscape affecting consideration of ESG factors in investing is complex and evolving rapidly, noted ACA managing director Julian Seelan. There is some convergence across regions. One area of note is the prevention of greenwashing, which is commonly defined as the act of making false or misleading statements about the ESG benefits of a product or service. In December 2023, the International Organization of Securities Commissions (IOSCO) issued supervisory practices to address greenwashing, which regulators have been approaching in different ways.

See our two-part series: “Making Sense of Evolving Regulations, Recent Enforcement Efforts and Antitrust Claims as to ESG Investing in the U.S. and E.U. (Part One of Two)” (May 10, 2023), and “How to Navigate the Rough Waters and Turning Tides of U.S. States’ Anti-ESG Movement and Europe’s Pro-ESG Measures (Part Two of Two)” (May 31, 2023).

U.S. Landscape

“ESG has now become mainstream at the SEC,” di Florio remarked. It has issued three new or proposed rules and brought a steady stream of enforcement proceedings involving ESG and climate. The SEC could adopt its proposed ESG disclosure rules for investment advisers as early as this April, Seelan said. The rules would require updated ESG disclosures in Form ADV. In the interim, advisers are reviewing their disclosures and documentation to ensure alignment with their ESG-related activities.

In September 2023, the SEC updated the Names Rule for investment funds to address the product-labeling element of greenwashing, Seelan explained. Pursuant to the amended Names Rule, if a registered fund’s name suggests a focus on ESG, the fund must invest 80% of the fund’s value in ESG-related investments.

There is also a great deal of activity on ESG investing at the state level, continued Seelan. For example, California’s proposed climate rules could affect private equity portfolio companies and certain large asset managers. Firms must contend with conflicts among SEC, state, U.K. and E.U. ESG regulations. There is greater urgency on ESG issues in the U.K. and E.U. than in the U.S., he added.

U.K. Landscape

The U.K. has one of the most complex ESG regimes, according to Seelan. Relevant regulations include the Anti-Greenwashing Rule, the Task Force on Climate-Related Financial Disclosures (TCFD) and the Sustainable Finance Disclosure Regulation (SFDR), which applies to U.K. firms that market in the E.U.

The Anti-Greenwashing Rule, whose compliance date is May 31, 2024, affects all U.K.-regulated firms, said Seelan. It requires firms to ensure sustainability claims about a product are fair, clear and consistent. Moreover, the product’s sustainability characteristics must reflect all of the claims. To prepare, firms are reviewing all marketing materials, websites and disclosures – including disclosures under the SFDR – and ensuring they have documentation to support all their claims.

The TCFD takes effect on June 30, 2024, for all asset managers, pension providers and life insurance companies with more than £5 billion in AUM. It took effect last year for firms with more than £50 billion in AUM. Firms must make disclosures consistent with TCFD recommendations. Disclosures primarily concern a firm’s entity-level and product-level climate programs. Firms should compare their current activities with the TCFD’s recommendations, with a view to aligning them with the regime, Seelan advised.

See “E.U. Takes Lead on AI and Climate Change Via ESG Regulation” (Jan. 10, 2024).

U.K. Developments

The U.K. Financial Conduct Authority (FCA) has not yet turned its attention to either AI or electronic communications, noted ACA director Charlotte Longman. Although there are no “huge fireworks” on the U.K. regulatory front, firms must keep abreast of several changes. The pending or potential changes affecting diversity and inclusion (D&I), the U.K. Senior Managers and Certification Regime (SMCR) and overseas funds regime (OFR) are all intended to advance the agency’s competitiveness agenda.

Misconduct, D&I and SMCR

The FCA is expected to issue final rules on non-financial misconduct later this year, which are expected to formalize its prior guidance on the subject, said Longman. They will take effect 12 months after publication. Firms have asked for clearer guidance on what constitutes non-financial misconduct. Bullying and harassment are likely to be included.

The FCA has expressed the view that diverse workplaces can not only result in better decision making and product design but also improve the U.K.’s competitiveness, noted Longman. To that end, the FCA has proposed new data reporting requirements for the majority of firms. Firms with more than 250 employees will be required to design a public D&I strategy with objectives, goals and targets. They will also be subject to additional reporting, including staff demographics.

Last year, the FCA sought input on the effectiveness, scope and proportionality of the SMCR. This year, the agency is likely to consult on enhancements to make the regime more efficient and effective and avoid discouraging talent from coming to the U.K. It will probably result in some fine-tuning but not a complete overhaul, Longman opined.

See “U.K. Equifax Fine Calls for Stricter Parent-Subsidiary Data-Sharing Processes” (Oct. 25, 2023).

Payment for Investment Research

In contrast to the soft dollars regime in the U.S., the recast Markets in Financial Instruments Directive (MiFID II) requires U.K. firms to pay for investment research either out of their own resources or through a research payment account funded by clients, Longman explained. The unbundling of trade execution and research has not achieved the desired results of price transparency and new sources of research. A U.K. government-commissioned report made seven recommendations regarding investment research, one of which was to permit payment for certain research on a bundled basis. Although investment managers might welcome this third option, it remains to be seen whether allocators or asset owners will accept it.

EMIR REFIT

The pending regulatory, fitness and performance assessment (REFIT) of the European Market Infrastructure Regulation (EMIR) will result in a significant change in derivatives reporting, Longman said. Both the E.U. and the U.K. are implementing expanded reporting frameworks. The E.U. revisions take effect in April 2024, while the U.K.’s take effect in September 2024. In addition to the divergent implementation dates, there are substantive differences in the new reporting requirements. Firms might have to report under old regimes and new regimes simultaneously.

Overseas Funds Regime

Brexit ended E.U.-domiciled funds’ unrestricted access to the U.K., noted Longman. The Overseas Funds Regime (OFR) will enable funds domiciled in jurisdictions approved by the U.K. Treasury to market to U.K. retail investors, she explained. The regime is predicated on the concept of “equivalence.” The government recently announced that it will grant equivalence to E.U. Undertakings for the Collective Investment in Transferable Securities funds. Additionally, to smooth the transition to the OFR, it is extending through 2026 the temporary arrangements that authorized certain other European funds to be marketed in the U.K.

Use of Compliance Technology

Use of compliance technology is crucial, according to Cooke. Regulators are focusing on how firms use technology in compliance processes, including:

  • surveillance;
  • identifying off-channel communications and policy violations;
  • client relationship management for Reg BI compliance; and
  • code of conduct and employee monitoring.

Firms can use technology to aid in complying with evolving regulatory obligations, Emery said. When a firm’s data “lives under one roof,” it is more reliable, more powerful and easier to access. Digitized information stored in a relational database can help advisers incorporate and adapt as needed to regulatory changes. “Structuring your program in an integrated and a thoughtful way with a single source of truth, wherever that’s possible, will really set you up for success,” she said.

“Using available technology, if a regulatory change occurs, an adviser should be able to make the change in one place and the system will reflect the change in the other relevant parts of the compliance program,” Emery explained. For example, a finding in a compliance review or test should trigger a review of the associated risks and controls, followed by a review of the associated policies and procedures to ensure alignment and any relevant employee training or certifications. Technological solutions can facilitate that process and the corresponding documentation. They are also well-suited to tracking side letters and fund expense matrixes.

AI is being used in regulatory technology, especially for reducing false positives in traditional surveillance mechanisms and escalating potential risks for human review, continued Emery. It is also being used to automate certain repetitive or labor-intensive tasks. Of course, firms must ensure the data to which the AI is applied is accurate and reliable.

Firms should update their procedures to incorporate any new technology solutions and address regulatory focus areas and new requirements, Cooke advised. Their risk management programs should:

  • address not only regulatory risk but also risks associated with counterparties, market liquidity, vendor management and conflicts of interest;
  • ensure remediation of any identified issues and documentation of that remediation;
  • develop and use appropriate metrics; and
  • establish committees for risk mitigation and remediation.

See “Using RegTech to Enhance Compliance” (Jun. 30, 2021).

People Moves

Biometric Privacy Team Moves to Blank Rome


Three litigators focused on biometric privacy have joined Blank Rome in its Chicago office. Daniel Saeedi, who will co-lead the firm’s biometric privacy team, and Rachel Schaller arrive as partners, along with associate Gabrielle Ganze. The trio joins from Taft Stettinius & Hollister.

The group has litigated class actions under Illinois’ Biometric Information Privacy Act (BIPA), as well as cases involving employee privacy tied to an assortment of other technologies, and data breaches and website accessibility lawsuits. The attorneys also have served as amicus counsel on BIPA matters.

The team additionally has experience advising on best practices for biometric privacy as well as new and proposed laws addressing biometrics.

Moreover, Saeedi and Schaller help clients conduct internal investigations involving data privacy and computer fraud issues.

For insights from Blank Rome, see our two-part series on the shifting BIPA landscape: “Notable Trends and Developments” (Sep. 7, 2022), and “Avoiding Liability” (Sep. 14, 2022).