
Plunging into the unknown: companies should look to sound privacy practices as they integrate AI

October 24, 2023

By: Nicole Kardell

How would you like to dive off a cliff with no idea how far you will drop, how deep the water is, and zero training on how to properly position your body to minimize impact once you hit the water?

That’s how we approach artificial intelligence these days. Or perhaps more aptly put: How would you like to follow a bunch of lemmings off a cliff…? Maybe “lemmings” sounds a bit harsh (and apparently they do not in fact follow each other to their demise), but the truth is, we don’t really know the potential consequences (good or bad) of AI and machine learning. And yet we are all taking the plunge, blindly and enthusiastically.

We need to be more thoughtful and circumspect about how to properly adopt and adapt AI and ML. We would benefit from guidance and a well-reasoned legal framework.

Lawmakers across the globe seem to know this. Regulating AI is a hot topic among U.S. federal and state legislators (see e.g., here and here) and regulators elsewhere (see e.g., here about the EU AI Act or here about AI principles adopted in Japan). But without any laws in place, what should companies do to ensure that they integrate AI in a manner that will not cause future legal challenges or reputational harm? Importantly, as laws come into play (and they are imminent), what measures should companies undertake now to minimize compliance costs down the road? Every business contemplating how AI will benefit it needs to factor in internal governance, as regulation is almost guaranteed.

If and as you incorporate AI into your business model (based on current trajectory, AI will be as prevalent and important to businesses as the internet), you should prepare for regulatory oversight and related compliance measures. You should also recognize the concerns around harnessing AI and ensure that ethical standards are incorporated into your use of AI.

Several organizations have published studies to provide interim guidance where no legal framework exists. These studies are in demand, as companies look to establish internal policies that mitigate risk in AI and ML adoption. Having reviewed a few of these studies, we think a great place for businesses to start is to build on governance systems they already have in place, most notably privacy and information security frameworks.

The International Association of Privacy Professionals (IAPP) published a study earlier this year, their Privacy and AI Governance Report, which outlines how businesses have built, and can build, an AI governance system upon a privacy framework. As the study notes, “more than 50% of organizations building new AI governance approaches are building responsible AI governance on top of existing, mature privacy programs.” The IAPP study highlights key principles that should be incorporated into an AI framework: privacy, accountability, robustness, security, explainability, fairness, and human oversight. To the privacy professional, all of these principles are familiar terms necessarily incorporated into privacy policies and procedures. When it comes down to it, privacy is at the heart of ethical concerns over AI adoption: when personal data is the basis for AI and ML algorithms, what is done with that data is a privacy concern:

  • Will the outcomes generated by AI algorithms adversely impact individuals?
  • Will individuals be ok with their personal data being collected and processed for AI/ML purposes?
  • What are the security risks to individuals when their data is being collected and processed for AI/ML purposes?

As the IAPP points out in their study, privacy laws are already in place to address these concerns:

  • Non-discrimination, especially when it comes to automated decision-making, is a part of privacy laws across jurisdictions.
  • Transparency – i.e., telling people what personal data of theirs you collect, why, and with whom you are sharing that data – is also required across jurisdictions’ privacy laws, with many of these laws requiring you to obtain impacted individuals’ consent.
  • Data security is also a consistent privacy law requirement, as companies must ensure that they have sufficient security measures in place to protect personal data against unauthorized use or access.

There are a host of cross-over practices between privacy and AI/ML for companies to consider as they develop an AI governance system.  Most importantly, privacy by design – a requirement under the GDPR that companies “bake” data privacy into their data use practices – is a helpful template for AI “ethics by design.” Companies need to establish ethical principles to frame their data use and infuse those principles into their AI practices.  How companies go about “privacy by design” can help inform them on how to approach AI governance, including which internal stakeholders to involve.

We encourage companies that are taking the plunge into AI/ML to do so with the right tools, including building on their privacy framework (and, if they have no such framework in place, working on developing AI and privacy governance as soon as possible).

Nicole Kardell

Nicole is a certified privacy professional with expertise in European privacy law (CIPP/E), in particular GDPR. She helps companies navigate the changing face of privacy regulations and keep their business practices and partnerships in compliance with the law both domestically and abroad.
