Online Advertising

Tracking Technologies: Compliance Challenges and Solutions


The dynamic state of digital technologies, the ecosystem around them and the legal regimes that govern them present companies with complexities that require thoughtful compliance and risk-mitigation strategies. This final installment of a four-part article series examines compliance challenges and solutions specific to the digital advertising industry, as well as broader tracker-related litigation risks and mitigation steps.

Part one kicked off this article series with a comprehensive review of the legal landscape around digital tracking. Part two took a deep dive into the technical workings and types of digital data collection tools. Part three provided a roadmap for organizations building – or working toward – a comprehensive, cross-functional program for managing digital trackers.

See “Benchmarking the Impact of State Privacy Laws on Digital Advertising” (Oct. 11, 2023).

Ad Industry Challenges and Solutions

The complexity of data flows in the digital advertising industry, particularly with respect to programmatic advertising, requires a robust cross-functional approach to privacy compliance. That involves not only the types of data mapping, scans and other activities that happen within the four walls of companies and have been discussed in prior installments of this series, but also leveraging industry solutions to effectuate compliance.

Compliance Hurdles in a Complex Ecosystem

The OpenRTB technical specification, which undergirds programmatic advertising, created efficiency in the digital ad supply chain that greatly benefited advertisers, but it also introduced numerous interconnectivity points involving disclosures of personal information. As a practical matter, to deliver and measure a single programmatic ad, there can be dozens of “sales” of personal information, such as when supply-side platforms, ad exchanges or mediation platforms send out bid requests containing personal information in relation to a particular consumer (or their associated device), when measurement companies and other vendors include pixels in the ad impression, and more.
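To make these data flows concrete, the sketch below shows a simplified, illustrative OpenRTB-style bid request. The field names follow the public OpenRTB 2.x specification, but the object is heavily abbreviated and the values are invented for this example.

```typescript
// Illustrative, abbreviated OpenRTB 2.x-style bid request. Several of these
// fields (ip, ifa, geo, user.id) are personal information under the CCPA, and
// transmitting them to bidders can constitute a "sale."
interface BidRequest {
  id: string;                                               // unique auction ID
  imp: { id: string; banner?: { w: number; h: number } }[]; // ad slots up for auction
  site?: { domain: string; page: string };
  device?: {
    ip?: string;                          // consumer's IP address
    ua?: string;                          // user agent string
    ifa?: string;                         // resettable advertising identifier
    geo?: { lat?: number; lon?: number }; // device location
  };
  user?: { id?: string };                 // exchange-specific user ID
}

const exampleRequest: BidRequest = {
  id: "auction-8f3a",
  imp: [{ id: "1", banner: { w: 300, h: 250 } }],
  site: { domain: "publisher.example", page: "https://publisher.example/article" },
  device: {
    ip: "203.0.113.7",
    ua: "Mozilla/5.0 ...",
    ifa: "38400000-8cf0-11bd-b23e-10b96e40000d",
    geo: { lat: 34.05, lon: -118.24 },
  },
  user: { id: "exchange-user-123" },
};
```

A supply-side platform may broadcast a request like this to dozens of bidders for a single impression, which is why one ad can involve many simultaneous “sales.”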

The complexity of the digital advertising ecosystem with respect to data exchanges, whether in the bidstream or facilitated by digital trackers, creates material compliance challenges for organizations. The obligations imposed by the California Privacy Rights Act (CPRA) amendments to the CCPA highlight these challenges. Before those amendments, the CCPA required “businesses” to enter into contracts with their “service providers” containing certain privacy-protective provisions. While not underestimating the challenges of the contracting process, companies at least knew (or should have known) who their service providers were.

The CPRA amendments, however, greatly expanded the contractual requirements such that all “sales” of personal information to “third parties” must also be supported by contracts with prescriptive privacy provisions. While this requirement undoubtedly serves an important privacy value, compliance can be challenging for certain “sales” that take place in the digital ad supply chain because, in at least some cases, the entities disclosing and receiving personal information do not have a formal business relationship governed by a commercial agreement.

For example, when an ad renders on a publisher’s page, the publisher’s ad server typically must disclose the consumer’s IP address to the advertiser’s ad server to retrieve the ad. No money changes hands between the ad servers, and, as such, they historically have not entered into contracts with each other. Indeed, the publisher typically does not know which advertiser will win a particular bid to serve the ad or which advertiser’s ad server will be used. Another example occurs when an ad renders on the publisher’s digital property and pixels or tags from advertiser-engaged vendors fire from within the ad impression itself. By permitting this to happen on its digital property, the publisher “makes available” personal information, such as an IP address or other identifiers, to those vendors, but the publisher typically does not have an agreement with those vendors or with the underlying advertiser that engaged them as its service providers. And again, the publisher often does not know which vendors will show up in the impression itself.

See “Lessons From California’s First CCPA Enforcement Action” (Sep. 28, 2022); and “Lessons From California’s DoorDash Enforcement Action” (Mar. 6, 2024).

IAB Solutions

Such compliance challenges in the digital advertising context, including the numerous “sales” occurring at different points in the digital ad supply chain, necessitate that the industry come together to create compliance solutions for problems that publishers, advertisers and adtech vendors could not easily solve in individualized campaign transactions. Recognizing these challenges, the Interactive Advertising Bureau (IAB) has provided leadership in bringing industry stakeholders together and formulating proposed legal and technical solutions that complement each individual company’s efforts to achieve compliance.

In the U.S., IAB Privacy’s Multi-State Privacy Agreement (MSPA) provides a solution for the aforementioned gaps in contractual privity. The MSPA is a contract with privacy terms that “spring into place” among its network of more than 1,200 signatories throughout the digital ad distribution chain. In other words, when a publisher’s ad server “sells” personal information to an advertiser’s ad server in the context of an MSPA transaction, and everyone is an MSPA signatory, the MSPA’s contractual terms follow the data and endeavor to create the contractual privity between the participants that the law now requires.

More broadly, the MSPA creates a common set of privacy terms and principles throughout the distribution chain that seek to raise the bar for privacy and, in doing so, serve as a transparent tool to help companies achieve compliance with the ever-growing number of U.S. state privacy laws. For example, state privacy laws and implementing regulations increasingly require due diligence of counterparties with respect to data practices. The MSPA creates a compliance paradigm in which publishers and advertisers know the specific privacy terms that attach to personal information as it traverses the digital ad supply chain. Moreover, the MSPA spares a party from facing a counterparty that holds creative or myopic views of how the privacy laws apply and seeks to link those views to privacy provisions that travel down the distribution chain. The MSPA instead sets a common set of compliant privacy terms for all market participants to point to.

The MSPA also creates a multistate compliance framework, giving publishers and advertisers the option of a national approach that applies a highest common denominator across the state privacy laws and transmits privacy choices through the digital ad supply chain using the IAB Tech Lab’s Global Privacy Platform (GPP) signaling specification.
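As a rough illustration of how those choices travel, the sketch below shows how a page participant might read the GPP signal, assuming a consent management platform on the page exposes the IAB GPP CMP API’s __gpp command interface; the callback payload is simplified for this example.

```typescript
// Minimal sketch of reading the GPP signal, assuming a CMP exposes the IAB GPP
// CMP API's window.__gpp command interface. The payload shape is simplified;
// consult the Tech Lab specification for the full shape.
declare function __gpp(
  command: "ping",
  callback: (
    data: { gppString: string; applicableSections: number[] },
    success: boolean
  ) => void
): void;

__gpp("ping", (data, success) => {
  if (!success) return;
  // The GPP string encodes the consumer's privacy choices (e.g., state-law
  // opt-outs) and accompanies the data as it moves down the supply chain.
  console.log("GPP string:", data.gppString);
  console.log("Applicable sections:", data.applicableSections);
});
```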

The complexity of the digital ad supply chain similarly necessitates an industry solution to comply with the ePrivacy Directive and the GDPR. IAB Europe’s Transparency & Consent Framework (TCF) stitches together publishers, consent management platforms and adtech companies in a common framework to achieve compliance with the GDPR and ePrivacy Directive’s applicable transparency and choice requirements. As the landscape has evolved in Europe, regulators are increasingly making clear that consent is required for behavioral advertising and may be required for activities such as measurement of digital ads. Given the unique role that publishers have in managing relationships with consumers, the TCF standardizes the means for obtaining consent from consumers for those publishers and downstream companies. Like the MSPA, the TCF relies on an IAB Tech Lab technical specification to transmit user privacy choices to companies participating in digital ad transactions.

Finally, the IAB Tech Lab is completing work on a deletion specification to address state privacy law and GDPR deletion requirements. The comment period closes April 22, 2024. Again, the CPRA’s amendments to the CCPA highlight the challenges of operationalizing certain requirements in the digital ad supply chain. Before those amendments, the CCPA required businesses to pass deletion requests to their service providers to act upon. The CPRA significantly expanded the scope of the deletion obligation, requiring businesses to pass deletion requests to all third parties to whom the business “sells” personal information. That includes not only “sales” of personal information to other parties in the bidstream, but also “sales” of personal information transmitted by publishers or advertisers through tracking technology. Given that a single ad can have dozens of “sales” associated with it, practical questions arise about how companies can achieve compliance. The IAB Tech Lab is addressing this fundamental challenge with its anticipated deletion specification, which will provide a standardized and interoperable framework for passing deletion signals to third parties and service providers.

For organizations involved in any aspect of the digital advertising ecosystem, particularly those engaged in or supporting programmatic advertising, industry solutions such as those created by the IAB should be evaluated as a potentially important component of the company’s privacy compliance program.

See “IAB Unveils Multistate Contract to Satisfy 2023 Laws’ Curbs on Targeted Ads” (Feb. 22, 2023).

Tracker Litigation Risk and Mitigation

Even organizations that implement a gold-standard tracker governance program and utilize both state-of-the-art technology tools and evolving industry self-regulatory solutions face significant risk of privacy class action litigation in the U.S. related to tracking technologies.

See “Google’s Wiretap Cases Highlight Evolving Privacy Transparency Standards” (Jan. 24, 2024).

VPPA and CIPA Claims

In addition to the most recent wave of Video Privacy Protection Act (VPPA) cases, which have focused on social media pixels used in connection with website video content, the plaintiffs’ bar is continuing to test case theories under the California Invasion of Privacy Act (CIPA). Undeterred by many dismissals under the wiretap provisions of that law, plaintiffs have embraced a new flavor of these cases in 2024 based on arcane CIPA provisions that restrict use of “pen register” or “trap and trace” devices without a court order. Both class action and individual lawsuits have been filed, and scores of claim letters issued, asserting the credulity-stretching theory that website tracking technologies – even those used just for basic site analytics – violate provisions that traditionally have been limited to physical devices (typically used by law enforcement) that record numbers dialed from a specific telephone line or the originating numbers of calls placed to that line.

See Cybersecurity Law Report’s two-part series on website-tracking lawsuits: “A Guide to New Video Privacy Decisions Starring PBS and People.com” (Mar. 29, 2023), and “Takeaways From New Dismissals of Wiretap Claims” (Apr. 5, 2023).

Steps to Avoid Risk

As courts are put in the unenviable position of trying to make sense of the latest CIPA claim theory, organizations should consider taking the following steps to try to avoid being on the receiving end of a complaint or claim letter.

  • Privacy Disclosures: First and foremost, make sure your website privacy disclosures are accurate, robust and presented in a manner that does more than tick the box of bare-minimum compliance. Well-crafted disclosures provide the strongest protection against CIPA claims.
  • Cookie Banners: Consider implementing a cookie banner on your website that is tailored for the unique risks posed by CIPA litigation. Banners should inform website visitors of the collection or recording of information through the use of tracking technologies and incorporate a form of consent – either implied or express, depending on the level of risk tolerance.
  • Suppressing Riskier Trackers: If a cookie banner is used, consider configuring it to suppress riskier trackers until consent is provided, at least for California visitors, as in the sketch following this list. Riskier trackers include those that transmit the contents of communications, infer sensitive data or share data with third parties that have the right to use the collected data for their own purposes.
  • Unnecessary Trackers: As noted in part three of this series, engage in regular review of trackers incorporated into websites and apps, and remove trackers that are no longer providing material business benefit.
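The following is a minimal sketch of the consent-gating approach described above; the onConsentResolved hook is a hypothetical stand-in for whatever callback the chosen cookie banner or consent management platform invokes once the visitor’s choice is known.

```typescript
// Minimal sketch of consent-gated tracker loading. onConsentResolved is a
// hypothetical hook a cookie banner would call once the visitor decides.
function loadTracker(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

function onConsentResolved(hasConsent: boolean): void {
  // Strictly necessary or low-risk tags could load unconditionally elsewhere.
  if (hasConsent) {
    // Riskier trackers (third-party pixels, session replay, etc.) load only
    // after consent, at least for visitors in higher-risk jurisdictions.
    loadTracker("https://vendor.example/pixel.js");
  }
}
```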

See “After Death of the Cookie, New Advertising Strategies Raise Compliance Questions” (Sep. 2, 2020).

Future-Proofing

As part of an overall digital tracking program, when implementing the practical steps above alongside the governance guidance set forth in part three, it is critical to ensure the program is sufficiently nimble and scalable to adapt to the organization’s evolution and to the dynamic state of digital technologies and the legal regimes surrounding them.

Professionals with ownership over the program should keep pace with the growing risks and complexities around tracking technologies, as it is likely that organizations in the digital sphere will be facing ever-increasing risk and compliance requirements in the years to come. They should ensure that the company’s overarching business and technology strategic processes include mechanisms to flag any changes to digital products, advertising methods and tech infrastructure that may impact tracker tech risk posture, compliance approaches or both. Business, technical and revenue teams also should be updated on legal and regulatory changes on the horizon that may impact compliance implementation and strategic planning.

 

Michael Hahn is executive vice president and GC at IAB and IAB Tech Lab. He has responsibility for all legal matters, including the direction of legal strategy, privacy compliance, antitrust compliance, intellectual property rights issues and general corporate matters. Hahn also serves as an advocate for the digital advertising industry on common legal issues affecting member companies.

Leslie Shanklin is head of Proskauer’s global privacy & cybersecurity group, a partner in its corporate department, and a member of its technology, media & telecommunications group. Prior to joining the firm, she led global privacy teams for media and entertainment companies for over a decade and most recently served on the privacy leadership team for Warner Bros. Discovery. Her practice focuses on privacy and data security, delivering comprehensive expertise around data-related risk and compliance.

Julie Rubash is GC and CPO for data privacy software company Sourcepoint, where she coordinates legal efforts and ensures that the product suite innovates and expands to meet the demands created by the changing regulatory landscape. Rubash brings over 15 years of legal experience, both at law firms and as internal counsel in the media, technology and advertising sectors. Prior to joining Sourcepoint, she served as vice president of legal at advertising platform Nativo.

Artificial Intelligence

AI Governance Strategies for Privacy Pros


Privacy and compliance teams have become the front line at many organizations for ensuring the right safeguards are in place as new generative AI tools are being purchased and used by the business. If you ask 10 people what AI governance is, you might get 10 different answers, noted Alan Wilemon, director at INQ Consulting, during a panel at the International Association of Privacy Professionals’ Global Privacy Summit 2024.

The program’s speakers examined the current AI legal landscape, the key elements of AI governance, the intersection of AI and privacy, a cross-functional approach to AI governance and the technical side of AI governance. Wilemon moderated the panel, which featured Benjamin Brook, CEO of Transcend; Daniel Goldberg, a partner at Frankfurt Kurnit Klein & Selz; and Tina Hwang, vice president and CPO at Ancestry. This article synthesizes their insights.

See our two-part series on the practicalities of responsible AI: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 8, 2023).

Legal Landscape

Existing U.S. Regulation

The AI regulatory space has been described as “a soupy mess,” noted Hwang. The legal landscape is evolving rapidly, and there is no comprehensive AI law in the U.S. Instead, there have been multiple state-based and sectoral efforts that, directly or indirectly, affect the use of AI, including:

  • Utah’s AI Policy Act;
  • New York City Local Law 144, which prohibits use of automated employment decision-making tools unless there is an annual bias audit and appropriate notices;
  • health- and biometric-specific laws like Illinois’ Biometric Information Privacy Act and Washington’s My Health My Data Act;
  • insurance-related regulation in Colorado and New York;
  • disclosure laws governing consumer interactions with bots;
  • comprehensive state privacy laws;
  • state consumer protection laws; and
  • the FTC Act’s prohibition on unfair and deceptive practices.

Additionally, the federal Office of Management and Budget recently issued a binding memorandum concerning federal agencies’ use of AI, noted Goldberg. Although it applies only to agencies, many of the obligations will trickle down to providers and vendors, raising their compliance obligations.

See “Financial Services 2024 Privacy, Cybersecurity and AI Regulation Overview” (Feb. 14, 2024); and our two-part series on New York City’s AI audit law: “What Five Companies Published” (Sep. 13, 2023), and “Best Practice Guide” (Sep. 20, 2023).

FTC Leadership

The FTC has taken a leadership role with respect to AI enforcement, Goldberg continued. Its first true AI case was its December 2023 settled enforcement action against Rite Aid, which concerned alleged unfair use of facial recognition technology in violation of the FTC Act. Notably, Rite Aid was prohibited from using facial recognition technology for five years; required to destroy data models and algorithms; and required to direct its vendors to do the same. Additionally, the FTC has been sending warning letters concerning deceptive advertising of AI functionality and looking into other AI-related issues.

See our two-part series on the FTC’s Rite Aid order: “A Strong Message to Users of Biometrics and AI” (Jan. 24, 2024), and “Expanded Algorithm Disgorgement and a Compliance Roadmap” (Jan. 31, 2024).

E.U. AI Act and Pending U.S. Legislation

The E.U.’s new AI Act is also going to have a major impact on AI, according to Goldberg. It takes a high-level, risk-based approach.

In contrast, many U.S. laws and regulations are much more specific and granular. Notably, about one-quarter of the U.S. state legislatures are currently considering bills to regulate AI. Those bills take different approaches, including:

  • prohibiting use of AI tools that result in algorithmic discrimination, such as California’s Assembly Bill 2930; and
  • giving consumers certain rights with respect to AI and/or prohibiting use of AI in certain areas.

Additionally, pending regulations under the CCPA, which could be finalized in early 2025, will address use of personal information in automated decision-making technology (ADMT), said Goldberg. They may require, for example:

  • pre-use notice regarding use of ADMT;
  • offering an opt-out of ADMT;
  • access to information about how a person’s data is used in ADMT; and
  • notice of significant adverse decisions.

See “Examining Utah’s Pioneering State AI Law” (Apr. 3, 2024); and “New AI Rules: States Require Notice and Records, Feds Urge Monitoring and Vetting” (Jun. 22, 2022).

Framing the Issues and General Governance Strategy

Invest Now

One of the biggest challenges is convincing a business to invest now for something in the future, when you do not know precisely what it will look like, according to Goldberg. “I can’t point to one specific law and say, this is how you comply with it. Here are the penalties,” he observed. Nevertheless, investing in appropriate architecture now could save considerable time and expense down the road.

Another reason to get AI right from inception is the difficulty of going back and removing data. For example, a company that uses AI might get a data subject’s request to delete personal information, Goldberg said. Once that data is embedded in the model, it can be very challenging, if not impossible, to remove. For all intents and purposes, once a model is running, it is too late to remove data, Brook concurred. It is generally not possible to untrain the model or delete data from it. Ideally, all sensitive data should be scrubbed from the proposed training set prior to training.

Focus on Harm Mitigation

A principal concern over AI, whether predictive or generative, is what kind of harm it can do both on a societal level and an individual level, Brook said. Organizations deploying AI must determine how to set meaningful controls to mitigate that harm. AI involves extremely complex systems and massive data sets, and users have very little understanding of the models. The technology is “essentially inscrutable,” he observed.

Consider the Context

“There is no one clear set checklist that is going to get you to the right answer” on AI governance, said Hwang. It depends on the context, including:

  • how it is used;
  • how it will be deployed;
  • who will use it;
  • whether a vendor is involved; and
  • if a vendor is involved, which one.

All of those things matter. The key is for an organization to understand the risks strategically and evaluate those risks in the context of what it is trying to accomplish, added Hwang.

The first step in AI governance is to inventory and assess how an organization is using AI, Brook explained. That will assist in preparing for compliance with regulatory requirements like the E.U. AI Act. The second step, as in the privacy space, is to establish appropriate policies as to how AI should be deployed.

AI can be deployed in different ways, Goldberg said. For example, employees may wish to run source code through ChatGPT. In that case, a company needs to develop policies regarding that potential use. Alternatively, if a vendor is using AI, the company should be conducting due diligence to determine how it is being deployed. Finally, a company may be developing its own large language model (LLM) or modifying an existing LLM internally. Each of those use cases has an impact on the company’s risk and what it can do about it.

See “Welcome to the GPT Store – and Its Three Million Security Uncertainties” (Mar. 27, 2024).

Resist the Urge to Oversimplify

A common pitfall is to oversimplify and say “this is okay” or “this is not okay,” Hwang remarked. The former may be insufficiently conservative in light of the relevant risks, while the latter may be too conservative. A company should resist the urge to oversimplify and “really lean into the context and understand the nuances of what is happening,” she advised. Moreover, even if a company adopts a blanket prohibition, employees will find a way around the policy, Goldberg added. Thus, companies should seek practical solutions, rather than just saying “no.”

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

Deploy Technical Guardrails

Because people may work around policies, Brook said, the greatest challenge may be developing technical guardrails around AI. That entails determining:

  • if policies are being violated;
  • whether to block certain uses; and
  • whether the system is generating toxic, biased or inappropriate responses or cross-pollinating personal data.

“There’s very little visibility. There’s very little actual control. And so nobody really knows if or when something went awry until you see the screenshot on Twitter,” Brook remarked.

Do Not Count on a North Star

Compliance professionals often think about strategic compliance or principled compliance that is anchored on a “North Star,” according to Hwang. In the privacy context, CPOs may do gap assessments against the GDPR or CCPA and strategically address those gaps. Unfortunately, there is no North Star for AI yet. The closest may be the E.U. AI Act. “And if you ask this question one year from now, [the answer] may be totally different,” because the legal landscape is evolving rapidly, Goldberg remarked.

See “New AI Rules: Five Compliance Takeaways” (Jul. 13, 2022).

AI and Privacy

Many of the risks AI presents are already familiar to privacy professionals, Hwang noted. What is different is their scale and context. Privacy professionals are drawing on privacy principles and applying them in this new context, which may have more unexpected outcomes.

There is a natural connection between privacy and AI, Goldberg noted. Although AI laws may focus more on technology than on personal data, many still recognize concerns over use of personal or sensitive data. AI also presents issues around intellectual property rights.

Privacy professionals are also wrestling with an “ethics quagmire,” asking not only “can we” deploy AI in compliance with applicable law and regulation, but “should we?” Hwang observed.

“Should we?” is not a new question to privacy professionals, Goldberg added. When a business team asks them about a fantastic idea, they may reply, “Wait a second, is that really something we should be doing?”

“Having a strong foundation with a strong privacy program is the best thing you can do to start with AI,” Brook opined. As with privacy, AI governance entails understanding the data the organization holds, having meaningful ways of encoding preferences into its systems, giving users appropriate choices and implementing suitable monitoring.

How an organization approaches AI governance will depend on “what type of data you’re using and what type of industry you’re in,” added Goldberg.

See our two-part series on managing legal issues arising from use of ChatGPT and Generative AI: “E.U. and U.S. Privacy Law Considerations” (Mar. 15, 2023), and “Industry Considerations and Practical Compliance Measures” (Mar. 22, 2023).

Building an AI Governance Program With a LADEL

Organizations must take a cross-functional approach to AI, advised Hwang. A privacy officer tasked with addressing AI should seek to leverage existing programs, as well as both internal and external resources. To navigate the current “soup of AI regulation,” organizations can use a “LADEL” – learning, aligning, developing, evolving and leveraging, she explained.

Learn About How AI Is Used

The first step is to “really learn the technology and really lean in[to] how your company intends to [use AI],” which can be very difficult for people without technical expertise, said Hwang. Even so, an understanding of the technology is critical for developing a governance program. There are many available white papers and other resources on AI, including documentation provided by the big AI players. Moreover, those players may be looking for client feedback.

Do not let perfect be the enemy of good, Hwang advised. Start learning about AI and begin work on a governance program, which will evolve as you learn more.

Align Governance, Business Goals and Values

An organization should align goals by bringing cross-functional leaders together to ask about the value proposition of using AI and whether it is consistent with the company’s values and brand, advised Hwang. For example, a company may think twice about deploying a chatbot if its customers expect a human being to pick up the phone to answer their questions.

Develop Responsible AI Principles

The third step entails developing principles regarding privacy, intellectual property, ethics, bias and other concerns, drawing on cross-functional partners and getting executive buy-in. Here, too, there are many available resources, including the White House Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework. Companies may also seek to leverage their existing data governance structures, which may offer a cross-functional forum for evaluating AI use cases, as well as the associated risks, controls and other potential mitigants. They may also rely on their vendor assessment and auditing processes.

Companies must also deploy technical controls, Hwang added. Having a policy and a risk-based stakeholder approach is important, but a company must also have a technical backstop to keep things on track or stop them if something goes wrong.

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023); and “What the AI Executive Order Means for Companies: Seven Key Takeaways” (Nov. 8, 2023).

Evolve As Regulation Develops

AI governance is an ongoing process, Hwang continued. A governance program must evolve as laws and regulations develop and change and as the company itself changes.

Leverage AI Governance for Privacy

CPOs are “often not the most popular people in the company,” Hwang noted. When a CPO speaks about AI, however, the CPO may become “Miss Popularity.” CPOs can leverage the buzz around AI to reintroduce “privacy hygiene” to their organizations, which the organizations should have been doing all along. The new context generates renewed energy and enthusiasm for privacy-related conversations. Thus, discussions about AI governance present an opportunity to address broader data governance concerns.

AI governance does not have to be built from the ground up, Goldberg added. Organizations should be able to leverage their existing data governance programs. He reiterated the importance of working with stakeholders across functions, particularly because he has observed that different teams may use AI in different ways, which can vary from the company’s intended use.

See “IBM, eBay and Walgreens CPOs Outline 10 Steps for Building AI Governance” (Oct. 18, 2023).

Technical AI Governance

Most of the new AI laws are going to require documentation, auditing and monitoring, according to Hwang. Thus, appropriate technical controls and automation will be essential for an AI governance program. “And there’s simply no way to do that at scale without some type of technical ability to detect which systems are doing what with the data and why,” she cautioned.

Organizations’ vendors may start using AI to render services. To prepare for that situation, technical governance and policy governance will have to work together, Hwang said. Ideally, technological controls will be able to detect something different about the vendor’s systems and prompt an inquiry. Additionally, good AI governance may entail periodic vendor reassessments. “Annual due diligence is so incredibly important,” Goldberg stressed. Regulators are beginning to expect it.

Architectural Hygiene

Brook approaches technical AI governance through a hierarchy of needs pyramid. At the base of this pyramid is architectural hygiene, which is a foundational element. Had the GDPR been adopted in 1950, corporate data systems would have been designed quite differently, making it much easier to address privacy concerns. Organizations have that opportunity now with respect to AI, which will certainly be regulated, and there is still time to design an appropriate architecture to facilitate compliance. One way to do this is to funnel all LLM communications through a single “middleware layer,” rather than having multiple direct integrations of AI, Brook suggested.
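As a rough sketch of what that middleware layer might look like, the example below funnels every LLM request through one function that redacts and logs before forwarding; llmClient and the redaction rule are illustrative assumptions, not any particular vendor’s API.

```typescript
// Minimal sketch of a single LLM "middleware layer." All requests pass through
// one chokepoint that can redact, log and enforce policy. llmClient is an
// assumed downstream client, not a real vendor SDK.
type LlmRequest = { model: string; prompt: string; user: string };

declare const llmClient: { complete(req: LlmRequest): Promise<string> };

function redact(prompt: string): string {
  // Illustrative only: real deployments would use far more robust detection.
  return prompt.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED SSN]");
}

function auditLog(req: LlmRequest): void {
  // Auditability is built into the chokepoint itself.
  console.log(`[audit] ${new Date().toISOString()} user=${req.user} model=${req.model}`);
}

export async function callLlm(req: LlmRequest): Promise<string> {
  const sanitized = { ...req, prompt: redact(req.prompt) }; // keep sensitive data out of the model
  auditLog(sanitized);
  return llmClient.complete(sanitized); // the single integration point
}
```

Because every request passes through one function, the auditing, monitoring and enforcement layers higher in the pyramid all have a single place to attach.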

Auditability

Auditability must be built into the underlying architecture. Having a single solution through which all LLM communications pass makes it easier to audit how the AI is being used, said Brook.

Monitoring

Once organizations have established auditability, they must seek to understand what is actually happening. There is often “zero visibility into what’s actually happening with all of these chatbots and generative AI models” that their employees are using, Brook cautioned. Thus, organizations should seek to build in the ability to monitor for uses and outcomes that violate their underlying policies. That way, they can catch function creep and keep sensitive data out of LLMs.

Policy Enforcement

Additionally, organizations must build mechanisms for enforcing their AI policies into their systems. Implementing technical rules and controls is much easier to do when all the AI models that the organization is using flow through a single point, noted Brook.

Insights

At the top of the pyramid are the insights an organization can draw from its auditing, monitoring and enforcement, including the return on its investment in AI and the costs associated with using it, said Brook.

Organizations should focus on getting technological AI governance and architecture right from inception. If they do not, they will be playing “whack-a-mole,” just as they have been with privacy. Similar concerns are at play with cybersecurity by design and privacy by design, Goldberg noted. Technical considerations must go hand-in-hand with legal and compliance ones.

See our AI Compliance Playbook series: “Traditional Risk Controls for Cutting-Edge Algorithms” (Apr. 14, 2021), “Seven Questions to Ask Before Regulators or Reporters Do” (Apr. 21, 2021), “Understanding Algorithm Audits” (Apr. 28, 2021), and “Adapting the Three Lines Framework for AI Innovations” (Jun. 2, 2021).

Tech Meets Legal

How to Achieve Privacy by Design With a Technical Privacy Review


The GDPR and other laws mandate privacy by design, but it remains a vague obligation for companies, and they rarely achieve it without skillfully conducting a technical privacy review (TPR) before products are released.

Among lawyers, the privacy impact assessment (PIA) has overshadowed the TPR. The PIA is a mainstay of detailed regulations around the world and has become an essential piece of privacy team paperwork. With the rise of privacy engineering, the TPR has emerged as a step to take before the PIA, intended to uncover privacy defects before developers advance their products or projects. “A lot of times we do a TPR before a PIA is contemplated because the goal is having positive privacy-by-design principles in place,” said Microsoft senior privacy product manager Jay Averitt. The TPR often collects details useful for subsequent PIA narratives.

Although uncelebrated, the TPR offers protections amid regulators’ more fine-grained inquiries and concern over organizations’ accelerating integration of AI. At the International Association of Privacy Professionals’ Global Privacy Summit 2024, Averitt and privacy engineers from Meta, DoorDash and Uber performed a simulated TPR scrutinizing a consumer app’s reliance on a large language model (LLM). Their simulation and subsequent commentary presented questions to ask in a TPR, common action items that result and advice for surmounting the tensions they illustrated between product engineers and privacy overseers. This article distills their insights and the simulation dialogue.

See “Effective Use of Privacy Impact Assessments” (May 4, 2022).

How a TPR Differs From a PIA

PIAs and TPRs both rely predominantly on interviews, but the TPR usually has a hands-on component, the panelists said. These undertakings differ in their emphases, goals and output.

Privacy engineers or cybersecurity teams (reviewers) often conduct TPRs, seeking to spot privacy shortcomings in emerging products and deliver technical recommendations to address gaps in designs. TPR documentation often includes “design artifacts” like system architecture diagrams and data routing.

In contrast, legal teams or privacy program managers typically conduct PIAs, which are intended to demonstrate adherence to regulatory and legal standards and manage risk. Lawyers may conduct TPRs if familiar with the components of system and software architecture, said Averitt, who has worked as an attorney.

The TPR reflects privacy engineering’s rise as a distinct sub-discipline. The exercise aims to ensure that privacy becomes an essential component of the core functionality that a product or feature delivers, serving as a front-line check to see that the company is genuinely protecting its customers’ privacy.

See “Cybersecurity and Privacy Teams Join to Create Data Governance Councils” (May 4, 2022).

Increasing Need to Delve Into LLM Integration

TPRs echo longstanding cybersecurity compliance efforts to document an organization’s implementation of technical and organizational measures (TOMs) to satisfy regulators and partners.

Beyond their use in connection with computing systems, TPRs challenge product engineers’ focus on a fully functional app experience for consumers and technically sweet features, as well as their desire to stick the landing on integrating AI into the company’s products.

Now, the TPR process is also becoming a key checkpoint for companies’ use of LLMs. Overall, Averitt and other panelists described the central aims of TPRs as putting privacy into genuine practice, creating confidence throughout the organization about privacy protection, delivering helpful guidance to product teams and generating accountable records.

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023).

Mock TPR for an AI-Enhanced Recommender App

Focus on the App’s LLM Integration

In the simulation, two privacy engineers conducted a TPR interview with their company’s developer of “I Love Fitness,” an imaginary app that uses an LLM to recommend and enhance user workouts.

This TPR interview included questions about the LLM vendor, said DoorDash head of technical privacy and governance Nandita Rao Narla. LLMs pose several variables with possible privacy and security implications, noted Uber senior staff privacy architect Engin Bozdag. Rao Narla and Bozdag portrayed the reviewers.

Throughout this TPR, the two reviewers identified typical action items, albeit tailored to the use of an LLM vendor, they noted. Topics included:

  • encryption;
  • sensitive data;
  • logging;
  • deletion; and
  • contract provisions.

The TPR also included an app developer, portrayed by Meta privacy engineering manager Roche Saje. For the purposes of the simulation, Saje displayed an awareness of privacy but purposely lacked a focus on the details.

The TPR Begins: “The More Data That We Give Them, The Better”

Reviewer Rao Narla asked app developer Saje to describe the data collection, the data flows, and the architecture and microservices that her app uses.

The app collects “the user’s date of birth, their gender, their location. We’re also going to get their preferences like ‘I like 80s music’ or ‘I really like rock climbing,’” Saje stated. The app will “take all of those preferences, as well as any biometric data we have from previous workouts, and send it right to the LLM, because they told us that the more data that we give them, the better the recommendation,” she said.

The app sends the user’s request for a personalized workout as a payload of data to the third-party LLM service’s application programming interface (API), and the LLM synchronously sends the user an “awesome new workout,” Saje elaborated.

See “Getting Used to Zero Trust? Meet Zero Copy” (Mar. 1, 2023).

Action Item: Minimize Sharing of Sensitive Data

“Are you using any location or gender data?” Bozdag asked.

“We just send it all,” Saje replied.

Bozdag then emphasized the importance of data minimization and suggested that the app does not need to send the LLM “the precise location of the user.” He proposed that, “instead of eight decimal points, maybe we can cut it down to two? And maybe you don’t send the gender at all.”
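A minimal sketch of the minimization Bozdag proposes might look like the following; the payload shape is invented for this example.

```typescript
// Sketch of payload minimization before calling the LLM vendor: drop fields
// the recommendation does not need and truncate coordinates to two decimal
// places (roughly 1 km of precision). The payload shape is illustrative.
interface WorkoutPayload {
  dateOfBirth?: string;
  gender?: string;
  lat: number;
  lon: number;
  preferences: string[];
}

function minimize(payload: WorkoutPayload) {
  const { gender, dateOfBirth, ...rest } = payload; // omit what the model doesn't need
  return {
    ...rest,
    lat: Math.round(payload.lat * 100) / 100, // eight decimal points -> two
    lon: Math.round(payload.lon * 100) / 100,
  };
}
```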

See “A Sensitive Time for Location Data: Tips to Address New Rules and Vendor Standards” (Jan. 18, 2023).

Action Item: Assess Vendor’s Encryption Methods

“Looks like you’ll be storing a quite hefty amount of sensitive data at rest at the [LLM] vendor. Are they encrypting it?” Bozdag queried.

“I follow the documentation on the website from the LLM vendor on how to integrate with their API,” Saje answered. “It said something about encrypting. So, I did that step,” she said, then added, “should I look for something specifically?”

“Are we managing the encryption key? Are they? Are they rotating their keys? Do you have any idea on that?” Bozdag reacted.

Based on the information gathered in the interview, Bozdag advised that, ideally, the company should own the encryption key. “Some vendors offer that possibility,” he noted, warning that other vendors perilously use the same encryption key for all customers.

LLM vendors often say they encrypt data and comply with encryption standards, but companies should question that assertion, Bozdag urged.

See our three-part series on the keys to encryption: “Uses and Implementation Challenges” (Mar. 4, 2020), “Legal and Regulatory Framework” (Mar. 11, 2020), and “Effective Policies, Legal’s Role and Third Parties” (Mar. 18, 2020).

Action Item: Create Access Controls for Sensitive Data

“Do you make sure that access controls are implemented in this LLM integration?” Rao Narla asked.

“We definitely use access controls for the user data that has PII, like first name, last name, email address, phone number, Social Security number, all of that,” Saje enthused. “Then for the stuff that feels less personal, like the personalized workouts, that’s a little bit more open so that we can use it to enhance our products and services.”

“Tell me a bit more about the open part,” Rao Narla requested.

“We have a big set of unstructured data that has a lot of information about all the workouts that they’ve liked and how long they work out. We just dump everything in,” Saje said, clarifying that the data “doesn’t actually have the user’s name on it.”

Rao Narla then stressed that engineers could readily reidentify the individuals and that this sensitive data needed more protection. She advised that a company’s security team should implement “least privilege and need-to-know access” for a data set like this.

Action Item: Restrict Use of Personal Data to Personalization

“How will you make sure that this data that’s shared with the LLM vendor is only used for the purpose of personalization?” Rao Narla asked.

“We’ve done some work on that in the past, and what we do for a data set is assign a purpose,” Saje said proudly. “When anybody reads from the data set, they have to say that they’re using it for that purpose. If it doesn’t have the matching purpose, they can’t use it,” she explained.

Bozdag further questioned what type of data would be sent to the vendor and asked whether the company was trying to fine-tune the model with the app users’ workout data.

“The people on my team who actually know a lot about LLMs said that we have to do this in a two-step process. First, we train the model with data, and then we send the specific user information” for personalized recommendations, Saje explained. Some pre-recorded workouts are user favorites, so “we thought [those recordings and user likes] might be a good set of training data,” she continued.

The interviewers recommended that the developers speak to product counsel to ascertain whether it was acceptable to provide all of that user data. They further advocated educating the engineer about purpose limitation. Although that is not a typical action item for TPRs, it would be useful in instances like this, Rao Narla added.
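The purpose-tagging mechanism Saje describes might look something like this minimal sketch; the names and types are illustrative assumptions rather than any real system’s API.

```typescript
// Sketch of purpose-bound data access: each dataset carries an assigned
// purpose, and every read must declare a matching purpose.
type Purpose = "personalization" | "analytics" | "model_training";

class PurposeBoundDataset<T> {
  constructor(private readonly purpose: Purpose, private readonly rows: T[]) {}

  read(declaredPurpose: Purpose): T[] {
    if (declaredPurpose !== this.purpose) {
      throw new Error(`Access denied: dataset is limited to "${this.purpose}"`);
    }
    return this.rows;
  }
}

const workouts = new PurposeBoundDataset("personalization", [
  { userId: "u1", likes: ["rock climbing", "80s music"] },
]);
workouts.read("personalization");  // permitted: purposes match
// workouts.read("model_training"); // would throw: purpose mismatch
```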

See “Key Legal and Business Issues in AI-Related Contracts” (Aug. 9, 2023).

Action Item: Call the Product Counsel to Minimize the LLM’s Logging

“What is the vendor logging on their end?” Bozdag asked.

“I would guess they are logging everything” because they send debugging reports, Saje said.

“If they’re logging the input and output of all of your prompts, that could contain pretty sensitive information,” Bozdag told her.

Bozdag then reiterated that the developer should consult with the product counsel, adding logging to the discussion about data use. He urged developers to be more persistent than Saje, who had sent the issue to her product counsel but moved ahead without an answer because counsel was very busy and she did not want to be slowed down.

See “Expedia and Lululemon Privacy Pros Discuss Scaling Vendor Contracting for New Privacy Laws” (Apr. 19, 2023).

Action Item: Forward Data Deletion Requests and Validate

“What is your plan to delete the app data from offline databases [when users] demand?” Rao Narla asked Saje.

“This isn’t the first time that somebody asked us about this because we are GDPR compliant,” Saje replied. When a user makes the request, “we have a trigger in the offline database, which deletes it right at that moment.”

Bozdag and Rao Narla then stressed that deletion raised multiple issues, and recommended that the developer and product counsel:

  • determine if the vendor offers an API to streamline and protect the deletion requests;
  • insist the vendor send confirmation of deletions; and
  • check that vendors do not charge for each forwarded deletion request.

The company’s engineers should register every new database internally and externally to receive deletion signals, Bozdag added.
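Assuming the vendor does offer a deletion API, as the reviewers recommended verifying, forwarding a request and capturing the vendor’s confirmation might look like the hedged sketch below; the endpoint and response shape are invented for this example.

```typescript
// Hypothetical sketch of forwarding a consumer deletion request to an LLM
// vendor and retaining its confirmation as evidence. The endpoint and the
// response shape are invented; real vendors' APIs will differ.
async function forwardDeletion(userId: string): Promise<void> {
  const res = await fetch("https://llm-vendor.example/v1/data-deletion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ subjectId: userId }),
  });
  if (!res.ok) throw new Error(`Vendor deletion failed: ${res.status}`);

  const { confirmationId } = await res.json();
  recordConfirmation(userId, confirmationId); // keep proof the request was honored
}

function recordConfirmation(userId: string, confirmationId: string): void {
  console.log(`[deletion] user=${userId} vendorConfirmation=${confirmationId}`);
}
```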

Action Item: Enhance LLM Vendor Review and Conduct Separate Security Review

“Have you ever heard of things like prompt injection effects, secure output handling, training data set poisoning – does any of that sound familiar to you?” Bozdag asked, raising the LLM’s vendor security as a final TPR issue.

“I definitely heard about it, uh, in this meeting,” Saje replied.

This TPR had included some security questions, but Bozdag recommended that reviewers and developers arrange for the company’s cyber team to conduct a separate security review before the TPR to avoid wasted effort. A deeper dive on security issues is usually warranted, he continued.

He also recommended being more diligent in arranging vendor security reviews than Saje had been. She had not yet set up the review, claiming, “I’m talking to them tomorrow.”

Recommendations for Conducting TPRs

Do Not Count on Automation

Automated tools typically do not address a TPR’s core tasks, Averitt reported. Engineers may yearn for automation, but the quality of the TPR is rooted in asking questions that elicit clear explanations of features, the path of data and, especially, integrations. It would be difficult to come up with an automated way of asking the technical privacy questions, he posited, “because a lot depends on what feature I’m reviewing.” He added, “I don’t know what questions I’m going to ask until I actually open up the” product’s design with the developers.

Some privacy engineers have managed to automate compliance monitoring directly into developers’ infrastructure, which speeds review of “lower complexity questions” in TPRs, Saje revealed. Adherence to data retention schedules is an example of dataset monitoring that can be automated, she said.

Code scanners that detect personal data also can help with TPRs, Bozdag observed, but warned that “sometimes data flows are not reflected in the scanned code. Or what the engineer is providing is not 100 percent accurate. For the foreseeable future, we will have a combination of manual review supported by automated tools,” he summarized.

See “Checklist for Selecting Privacy Tech Solutions” (Nov. 1, 2023).

Gather and Document Evidence

Having records to show European and other regulators is a crucial goal of TPRs, Averitt contended. Companies use a variety of platforms for TPR record-keeping, whether off-the-shelf like Jira software or homegrown, but all should be a “sole source of truth,” he urged.

Reviewers typically produce a system diagram, which shows data flows and the measures protecting data along the way, said Saje. Evidence of controls might be “a screenshot of code that day or some ongoing monitoring,” she added.

Find Champions

Find and cultivate privacy champions among engineers, Averitt encouraged. Some may qualify to conduct TPRs while others can spread awareness and reduce friction when reviews occur.

See “Hallmarks of High-Impact Compliance Programs and Compensation Trends for Compliance Officers Who Implement Them” (Sep. 25, 2019).

Streamline and Prioritize

With the proliferation of coding-assistant tools and top-down pressure to quickly insert AI features into applications, the volume of needed TPRs will be a problem across industries, audience questions for the panel indicated.

Once a company has conducted some TPRs, subsequent reviews might be able to build on earlier ones and look only at new elements, Rao Narla suggested. This is likely only in companies that have established procedures for product developers to integrate privacy early, she qualified.

Redundancy in TPRs is a challenge in larger organizations, Averitt noted. “In a giant company a lot of siloing happens and not a lot of communication between teams,” he observed.

Streamlining and making connections across units is one of Saje’s top tasks, she added.

For bigger products or projects that will impact many of Uber’s units, the privacy engineers ensure stakeholders see the TPR report at the same time, Bozdag said. “Also, we have an office hour” to discuss or answer questions about a TPR, he revealed.

When it comes to volume, Saje noted, ultimately “there is no magic sauce here. You got to prioritize.”

Compile Metrics About TPRs

With accountability a main TPR aim, keep track of TPRs’ effectiveness – but use metrics that capture impact, Averitt urged. “Counting how many reviews [a reviewer] has done or how long reviews took” does not accurately convey privacy improvement, he pointed out.

It is better for a technical review team to document that “we prevented this amount of privacy incidents by working in these controls,” Averitt said.

Reviewers also can tally the percentage of TPRs that produce a finding or recommendation, explained Bozdag. “Ideally, this metric should lower as your TPRs increase,” and it helps the company know it has begun to better embed privacy in product development, he added.

Record the confidence of the TPR subjects afterward, Saje suggested. “Ask the product team if they have greater confidence that their privacy is preserved from working with privacy engineering. Do they now feel like, ‘oh, this is actually watertight?’”

Reviewers can offer action items in a collaborative spirit rather than casting judgment on engineers who are juggling challenges. “People want to do the right thing, they just don’t know how,” Saje said.

People Moves

Venable Expands Cybersecurity Practice in San Francisco and Washington, D.C.


Venable has broadened its national security, international policy and IT expertise with two new additions. Davis Hake has joined as senior director of cybersecurity services in San Francisco, and Adam Dobell will serve as director of global security and technology strategy in Washington, D.C.

Hake guides clients through the changing IT risk landscape by advising on cyber risk, technology policy and go-to-market strategies. A key player in AI and cybersecurity coalitions, he has extensive experience developing products and strategy in the security and B2B SaaS markets.

Prior to joining Venable, Hake co-founded an international cyber insurance startup. Earlier in his career, he managed cybersecurity product marketing for a multinational cybersecurity company and held several government roles. As a director on the National Security Council, Hake led cyber incident responses for both classified and unclassified federal IT networks. In his role within the U.S. Department of Homeland Security, he managed cyber operations, interagency coordination and policy for high-ranking government officials.

Dobell focuses his practice on matters of national security and international policy, specifically in the Indo-Pacific region. He assists clients with policy developments, understanding geopolitical context, executing successful advocacy campaigns and building relationships with key policymakers in regions of interest.

Dobell most recently served as the first secretary for the Department of Home Affairs at the Embassy of Australia in Washington, D.C., where he regularly engaged with White House officials, congressional stakeholders and industry representatives to further Australia’s national security interests. He provided guidance on policy issues at the nexus of homeland security, technology and cybersecurity, including critical infrastructure protection, countering violent extremism, lawful access to encrypted data and telecommunications security. Dobell also advised senior department officials on international strategy as an executive officer. Additionally, he previously worked for the Australian Department of Immigration and Border Protection’s Pacific and Transnational Issues Branch, managing engagement on national security and border security policy with governments in the Pacific.

For insights from Venable, see our two-part series on legal and ethical issues in the use of biometrics: “Modality Selection, Implementation and State Laws” (Feb. 21, 2024), and “FIDO, Identity-Proofing and Other Options” (Feb. 28, 2024).

People Moves

Tech and Data Partner Rejoins Moses Singer


Moses & Singer has announced the return of Liberty McAteer as a partner in its intellectual property and AI & data law groups in New York. McAteer rejoins the firm from FreeWire Technologies, Inc.

McAteer focuses his practice on a variety of technology matters, including cybersecurity and privacy law compliance. He represents clients in cleantech manufacturing, machine learning and AI-powered products, fintech, ed-tech, health-tech, eCommerce, IoT, cloud products, mobile software, enterprise SaaS, social media, audiovisual media-sharing and eDiscovery. McAteer provides guidance on the entire lifecycle of a technology organization, from formation and finance through growth and governance to exit.

Prior to rejoining Moses & Singer, McAteer served as deputy counsel at FreeWire Technologies, Inc. Earlier in his career, he worked as in-house counsel at a group of software startups.

For insights from Moses & Singer, see “Making Sense of Evolving Regulations, Recent Enforcement Efforts and Antitrust Claims as to ESG Investing in the U.S. and E.U. (Part One of Two)” (May 10, 2023).