The Shifting Privacy Left Podcast

S3E11: 'Decision-Making Governance & Design: Combating Dark Patterns with Fair Patterns' with Marie Potel-Saville (Amurabi & FairPatterns)

Debra J. Farber (Shifting Privacy Left) Season 3 Episode 11

In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.


Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns are harmful to our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers for designing and architecting fair patterns.

Topics Covered

  • Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combatting Dark Patterns at Amurabi
  • The definition of ‘Dark Patterns’ and the difference between them and ‘deceptive patterns’
  • What motivated Marie to found FairPatterns.com and her science-based methodology to combat dark patterns
  • The importance of decision making governance 
  • Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
  • How dark patterns exploit our cognitive biases to our detriment
  • What global laws say about dark patterns
  • How dark patterns create structural risks for our economies & democratic models
  • How "Fair Patterns" serve as countermeasures to Dark Patterns
  • The 7 categories of Dark Patterns in UX design & associated countermeasures 
  • Advice for designers & developers to ensure that they design & architect Fair Patterns when building products & features
  • How companies can boost sales & gain trust with Fair Patterns 
  • Resources to learn more about Dark Patterns & countermeasures



Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

TRU Staffing Partners
Top privacy talent - when you need it, where you need it.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.

Marie Potel-Saville:

The problem lies in the fact that, because everybody, all humans, are going to react in the same way, well then, it makes us predictable and then it makes us manipulable. So, that's how dark patterns were created. In a way, you know, it's a manipulation of our cognitive biases. It's playing on these human weaknesses to make us do things without realizing, or even against our interests.

Debra J Farber:

Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber.

Debra J Farber:

Today, I'm delighted to welcome my next guest, Marie Potel-Saville, an esteemed lawyer and impact entrepreneur. After a decade of working in top-rated law firms, she moved in-house to become General Counsel of the EMEA region for several global companies, like Chanel. Marie is the Founder of Fair Patterns, a solution for detecting and remedying large-scale dark patterns. She's also Founder and CEO of Amurabi, an award-winning innovation studio specializing in ethical, privacy-friendly, and age-appropriate design; and she's Host of The Fighting Dark Patterns Podcast. Marie lectures internationally on human-centered law and legal innovation through design and is a member of the European Data Protection Board's Support Pool of Experts on dark patterns. So, as you may surmise, today we're going to talk about dark patterns: how to make sure that you're only creating fair patterns, not dark patterns, when you're building a new product or feature.

Marie Potel-Saville:

Thank you so much, Debra. It's a pleasure to be here with you.

Debra J Farber:

Oh, I'm so excited. I think this is a really great topic that people might know about at a really high level, but haven't had the time to really uncover all of the interesting literature and research around dark patterns and fair patterns. So, I'm really excited to have this conversation with you today. Maybe just tell us a little bit about how you ended up transitioning from practicing law to focusing your career on legal design and fair patterns.

Marie Potel-Saville:

Yeah, it's been an amazing journey - I have to say - and I enjoyed every single moment. I guess I'm at the time of my career where, you know, everything I've done previously fully makes sense now. It's like all of the various bits and pieces now fully click together and I'm able to bring my whole self to work, namely because I sort of invented my own job, which helps. But basically, to answer your question, I guess the beginning of my career was extremely traditional. You know, I was in big law, the usual suspects like Freshfields, Allen & Overy, in London, Paris, Brussels, etc. Then, I moved in-house, as you mentioned, and I guess this was the beginning of the journey, to the extent that it made me realize the huge gap between the legal advice that you provide as an external counsel and what it becomes in the field, in real life. The answer is not very reassuring. Basically, the law, within the company, is absolutely not understood. It's even rejected as a pure business constraint - something that's going to slow everybody down. I really wanted to change that. It's really because I'm in love with the law that I wanted to give the law back the place it deserves in society, but in companies as well. I really wanted to change those business constraints into empowering tools that provide efficient solutions. And so, in practice, what I did was that in my own legal division - I was VP Legal EMEA at Estée Lauder

Marie Potel-Saville:

at the time; I was covering 70 countries, 30 different brands, with a team of four lawyers, which was an interesting concept; and so, I started experimenting, leveraging Stanford's Legal Design Lab. At first, I think it was very clumsy on my side. I was just trying and testing and learning. But, what was really interesting is that, as clumsy as it was, it produced amazing results, so that my fellow VPs - the brand VPs and the marketing VPs, et cetera - would knock on my door saying, "Oh look, I didn't know that law could be so engaging," and they were referring, for example, to competition law training, which is not naturally fascinating to marketing people, to say the least. And they were like, "Oh, this is really interesting, can you train my team?" And so that encouraged me to actually properly train in innovation by design. I did a master's degree for 18 months and then I set up the company because I really wanted to share this new methodology as widely as possible: basically applying design to the legal arena to solve users' issues and bridge the gap between the law and its users.

Debra J Farber:

So tell us about your legal design firm, Amurabi, which focuses on innovation by design. What do you really mean by that?

Marie Potel-Saville:

Well, I mean that the law is not doomed to be impossible to understand, totally inaccessible. You know, all the walls of jargon that you hit online each time you click on terms of use or terms and conditions or privacy notices, etc. It's not doomed to be this way. There is actually a rock-solid methodology that enables us to transform all of these walls of jargon into content and information that everybody loves to read, that everybody truly understands, and that empowers users to better understand their rights and make their own free and informed decisions. Amurabi is an agency. We're not advisors. We don't produce fancy slides to tell you what to do; what we do produce is deliverables, tangible projects - for example, privacy notices that everybody would love to read, compliance programs, litigation design. That's basically what we've been doing for the past six years, and that's actually what led us to create our R&D Lab several years ago to specifically tackle dark patterns.

Debra J Farber:

That's really exciting. What exactly are dark patterns? I think it makes sense to define that before we go any further. I'm sure there's several different definitions, but how would you sum it up for the audience today in our discussion?

Marie Potel-Saville:

Of course. So basically, a dark pattern is an interface that deceives you or that manipulates you to make you do something you didn't mean to do and that could even be against your interests. Does that make sense?

Debra J Farber:

It does. So, how are dark patterns different from deceptive patterns? I mean, we hear both terms when it comes to problematic design in privacy and data protection and other areas. So, let's distinguish that: dark patterns versus deceptive patterns.

Marie Potel-Saville:

So, basically, Harry Brignull coined the term dark patterns nearly 14 years ago to describe what I just explained: tricks that make you do things you didn't intend to do. That was 14 years ago, and, along the way, Harry has really led the charge against deception and manipulation online in an absolutely brilliant way. He's now authored a book, which I can't recommend enough, called Deceptive Patterns. To answer your question, it's actually the same - dark patterns and deceptive patterns. It's the same concept.

Marie Potel-Saville:

Simply, Harry, with whom we have the pleasure to collaborate by the way, Harry was conscious of the fact that the term dark pattern could potentially be misinterpreted as associating dark, you know, with something negative. So, to avoid any possible misunderstanding, he had this term evolve into deceptive patterns. That's merely a precaution, and I fully subscribe to that precaution because, obviously, the intention is not to offend anyone. On the contrary, it's really a human-centric approach. That's all about protecting humans, that's for sure. So, he's had this term evolve; but, quite frankly, it really doesn't matter how it's called. What matters is to be aware of the reality of the manipulation and the deception online and to change the situation.

Debra J Farber:

Great. I mean, that's really helpful. I think now would be a great time to talk about your project, Fair Patterns (which anyone listening can access by going to fairpatterns.com). You not only created Fair Patterns with your team, but validated your work by deploying these patterns on a project with King Games - the maker of Candy Crush Saga. I think everybody pretty much knows that game; and that won you the IAPP award for Most Innovative Project, which is so exciting. Congratulations!

Marie Potel-Saville:

Thank you so much. Yeah, it was a real surprise for us, to be honest. That was back in 2022. To be named Most Innovative Privacy Project for basically the rest of the world, apart from the U.S., was a huge surprise for us.

Debra J Farber:

But so exciting. Tell us, what is Fair Patterns and what motivated you to found that project along with King Games?

Marie Potel-Saville:

Sure. Fair Patterns is a solution that fights against dark patterns by automatically detecting them and by transforming dark patterns into their countermeasures - fair patterns - which are basically interfaces that empower users to make their own free and informed choices. So, that's the concept that we've created after several years of R&D. What motivated us to create this project? It's basically all of the dark patterns that we're seeing on a daily basis in our projects at Amurabi. Each time we would transform a privacy notice or terms and conditions or any online journey, really, we would see them. We would spot them and we would get really irritated, even angry at times. It's completely crazy, but you get some dark patterns where, for example, there are two buttons - one to accept the offer and the other to reject it - and the button to reject the offer sometimes says things like, "Oh no, I would rather bleed to death," to make you feel really, really bad and ashamed of clicking on it. That's how crazy it gets. And so, we got so, so shocked by the scale of dark patterns and by the depth of this phenomenon. It's all around the world. It's quite deep - not just on the interfaces; it's also in the code. It's also, sadly, in algorithms; and that's not improving. So, back in 2021, we decided to create our R&D Lab specifically to focus on dark patterns.

Marie Potel-Saville:

Two years after that, we managed to create the concept of the fair pattern as a countermeasure. Maybe it's worth explaining this concept a little bit more. There has been a great amount of work done by academia over the past 14 years. So, you've got an amazing amount and quality of research on what to call dark patterns - what's the right name for them? What's the right taxonomy? You've got like 16 different taxonomies. What are the harms caused by this problem? Et cetera. So, basically, you've got over 10 years of problem-focused research, which is great, which is amazing; but then, very, very little research and few proposals on the solution. So, there were some proposals around bright patterns or light patterns.

Marie Potel-Saville:

Basically, these initiatives were proposing to nudge users towards privacy-friendly or consumer-friendly interfaces. To be honest, we're not sure that nudging is the solution here. The reason is that people don't learn anything when they are being nudged. Obviously, they just continue to blindly click. They're just directed to a supposedly ethical option instead of the bad one; but that doesn't really solve the core issue, which is that people have given up properly reading and thinking online to make their own free choices. So, that's really what we wanted to change, and the whole concept of Fair Patterns is to empower users to take even the two or three seconds that they need to think about it and to really make the choice that's meaningful and beneficial for them. Does that make sense at all?

Debra J Farber:

It definitely makes sense. It just highlights something I've been feeling for pretty much my whole career: why am I so excited about privacy and data protection? Why is that my jam, you know? Why am I so obsessed with this space? Right? A lot of it, for me - and I think generally - comes down to the fact that privacy is a subset of agency. How much more freedom-focused could this issue be? Right? Agency: your ability to make choices about your life, what you're doing, and what's collected about you. So, I think sometimes we forget that that's what privacy is: a subset of agency. And, if we're taking away people's ability to make meaningful choices, we're really taking away their agency, and that should be seen as a real negative, harmful thing at scale. Right?

Marie Potel-Saville:

This is so important and thanks for putting this context back. This is so central and I know that you were at the IAPP Global Privacy Summit because we saw each other.

Debra J Farber:

Yeah, that's where we met - at the last one.

Marie Potel-Saville:

I'm sure you remember one of the speakers for the closing session explaining that the new frontier is not so much data governance or AI governance - it's decision-making governance. How do we make sure that humans are still able to make their own decisions in a meaningful and ethical way? I think that's really the new frontier.

Debra J Farber:

Yeah, and I think that if that's not done in a good way, it's pretty easy to see that it will put constraints on society's decision-making as a whole - still individual decision-making, of course, but then we're steered by big tech or other forces rather than our own individual choices. So, yes, I think that was a great way to end the conference: notes to take home and stew on, marinate on. [Marie: Exactly.] You know, we're talking some meta concepts here, but why should business executives generally care about preventing dark patterns from being coded into their websites and apps and interfaces?

Marie Potel-Saville:

Yeah, you're right to bring the question back to reality, back to the field, back to what really matters. There are many, many reasons for executives to be concerned.

Marie Potel-Saville:

I will focus just on the business reasons, to be honest, because obviously there are plenty of ethical ones. But, just on the business side, what's really interesting is that we've seen that deception and manipulation online - all these dark patterns - are starting to stop being profitable. I think that they used to be profitable a couple of years ago; but, as customers become more savvy, and also more demanding, they really want a deep trust relationship with the brands they buy from. We're seeing that dark patterns simply do not work anymore, or not as well as before, purely in terms of profitability. And, that's really interesting because, of course, with a dark pattern - let's say for a renewal of subscription - you will likely get a very short-term boost in your turnover; but, as soon as users realize that they have been tricked, they will be furious, and rightly so. Then, they won't want to hear about you or your brand ever again. I guess that's the key point for businesses to care about.

Marie Potel-Saville:

The second business reason is simply the bad user experience. It's terrible. We've all experienced it. Right? I mean, you get annoyed. You see them - the "45 people are looking at the same room as you" messages. We've all been through these interfaces where we are tricked into paying for a seat on the plane when we all know it should be free. I mean, it really gets on our nerves. The European Commission actually produced a very interesting study back in 2022, where they showed scientific evidence that dark patterns actually increase your heart rate and increase anxiety. I mean, as if this world didn't generate enough anxiety by itself. [Debra: Wow!] So, the second business reason is really this bad user experience. People are fed up with it. And then, of course, we could talk for hours about all the other reasons - you know, the individual harms and the structural harms caused by dark patterns.

Debra J Farber:

Yeah, and I think we might even get to some of those questions. For now, I want to understand - I hear from you, and see in your work on your website, that dark patterns harm individuals and exploit our cognitive biases. So, you've already gone through some of the harms; but, can you describe how they exploit our cognitive biases? What does that mean? Give us some examples there.

Marie Potel-Saville:

Sure, absolutely. Perhaps it's useful for your audience, Debra, to briefly explain what cognitive biases are. This all stems from Daniel Kahneman's research. He sadly passed away recently, but he was really an amazing thought leader in the behavioral sciences; he was also a Nobel laureate in economics. Basically, he's the author of Thinking, Fast and Slow, which is still the authoritative book to date. He very clearly explained - identified first and then explained - that our brain works via two main systems: System 1, which is very fast, very efficient, but which relies on a number of cognitive biases precisely to help us make those decisions or choices very, very quickly; and then System 2, which is slower and also more energy-consuming - so the brain doesn't like that by default; it likes to save energy. But System 2 is basically what enables us to solve complex problems, like a math problem.

Marie Potel-Saville:

So, back to cognitive biases. In System 1, the reason why we can act very quickly and be that efficient is that, in order to make a choice, we fall back on - we resort to - these cognitive biases, and that's completely unconscious, obviously. We won't realize it. It means that all humans are going to react in the same way when they're faced with a given situation, a given type of information. Overall, there are around 180 cognitive biases. For example, when we are faced with information overload, the typical response is that we don't read. When we are faced with a risk of loss, there's the loss aversion bias, which makes us try absolutely to avoid that loss. When we are faced with information that is framed in a specific way, well, the cognitive bias associated with that means that we stick to that frame and we are less likely to challenge the first information that we got.

Marie Potel-Saville:

I could go on like that, you know, for a long time, but that's basically what cognitive biases are. And the problem with that? Well, first of all, you know, we can't avoid having these cognitive biases. That's part of being human, really. The problem lies in the fact that, because everybody, all humans, are going to react in the same way, well, then it makes us predictable, and then it makes us manipulable. That's how dark patterns were created, in a way. It's a manipulation of our cognitive biases. It's playing on these human weaknesses to make us do things without realizing, or even against our interests. Does that make sense?

Debra J Farber:

It does, and it makes me wonder. We use law to address harms to people, especially in the United States - our privacy laws have largely turned on whether there has been an actual harm, as opposed to being rights-based, like in the EU. What does the law generally say about dark patterns? Do we have good, established laws around the world that address dark patterns? What is the state of the law today?

Marie Potel-Saville:

Great question. The first thing to say is that it's always been prohibited to manipulate or deceive people. It never was licit in the first place. So, even if the term dark pattern was not specifically mentioned in a number of acts and regulations, for sure manipulating someone or deceiving someone is totally prohibited. So, that's the first thing that's really, really important. For example, in the U.S., you've got Section 5 of the FTC Act on deceptive practices, which is totally and fully applicable to dark patterns, and that's basically the legal basis that the FTC used to go after Epic Games for all the dark patterns and tricks in Fortnite, leading to a huge settlement of half a billion dollars, etc.

Marie Potel-Saville:

So, that's really important. You don't need to have the term dark pattern in a piece of legislation for a dark pattern to be caught by that legislation, which means that you've got general prohibitions of manipulation and deception in consumer law, and obviously in privacy law as well. Let's just remind everyone here that, in the GDPR, there's Article 5 with the fairness principle (which is at the core of the GDPR). Fairness means that, obviously, you can't trick anyone to obtain their personal data. That's completely illicit. Right?

Debra J Farber:

Especially as it relates to consent - fairness around obtaining consent.

Marie Potel-Saville:

Exactly, and what's interesting is that for the past two or three years, we've been seeing new legislation that specifically targets dark patterns. So, it comes in addition to all of the existing legal framework I just described. For example, in the U.S., the California Privacy Rights Act defines dark patterns specifically. The FTC staff report published in 2022 distinguishes four types of dark patterns. In the EU, we've got the Digital Markets Act, the Digital Services Act, and the brand-new AI Act, which also provide specific definitions of dark patterns and specific prohibitions. Basically, what it means is that the legal net is getting tighter and tighter. So, there are now multiple legal bases around the world that make dark patterns totally and utterly illicit.

Debra J Farber:

That's fascinating. It'll be interesting to see how quickly companies adapt to this growing net of laws around dark patterns as enforcement begins, and what that'll look like. I know we could probably talk about that for half a day as well. So, first I want to take the conversation really broad, to talk about structural risks to our economies and such; and then, I want the audience to know, we are going to get very specific: what do we mean by dark patterns, and what are the countermeasures - the fair patterns? But first, I want to bring it to the societal impact. How do dark patterns create structural risks for our economies and then, ultimately, our democratic models? How do they impact competition, trust in brands, and the overall market?

Marie Potel-Saville:

That's such an important question. This has been studied by many regulators. So, we've got studies by the OECD, for example, and many other regulators, like the Competition and Markets Authority in the UK and the European Commission. Basically, they all say the same thing: ultimately, dark patterns do affect competition. If consumers are prevented from changing suppliers because they are caught in subscriptions that they can never cancel, or because they are prevented from objectively comparing prices - which is another type of dark pattern - or if consumers do not have transparent, objective information online - yet another dark pattern - then they're not able to make the best decisions for themselves, and so it definitely affects competition. Also, dark patterns can be, and are, used to collect ever more personal data, which can give large groups a decisive competitive advantage or could even strengthen a dominant position - I'm sure you're seeing which type of players I'm referring to - and that creates a true structural risk for the economy. Let's just remind everyone that the reason why we still consider the market economy the best model to date is that it's supposed to bring the best benefits to consumers; and by best benefits we understand lower prices, better quality of services and products, and innovation.

Marie Potel-Saville:

But, what the OECD very clearly explained in its 2022 report is that if companies end up competing on the quality, so to speak, of their dark patterns instead of focusing on innovation, low prices, etc., then it's a losing game for consumers. They are being tricked. They don't get the lower prices. They don't get the better products. It's just a losing game. And then, of course, there's the trust issue. If consumers lose trust in brands because they've been tricked, because they've been manipulated, ultimately they also lose trust in the economy at large. So, that could also endanger the whole system. And, back to your point about the democratic model, ultimately it could also affect their trust in the overall democratic system - and it goes even beyond that.

Marie Potel-Saville:

Our main concern, I guess, at Fair Patterns, is that once we're all trained to accept - once we're all used to clicking "I agree" even though we haven't read one single line - then what's the next thing that we accept blindly, without having read it? From a democratic standpoint, it's an interesting question, particularly in 2024, which is a huge electoral year around the world. I think it was the Financial Times that published an article on the fact that no less than two billion citizens are going to vote this year - and, of course, you've got a big election coming up in the U.S., needless to say.

Debra J Farber:

Definitely. I think that all rings true to probably all of our experiences. So, thank you for framing the societal importance - or impact, I should say - of these dark patterns. Let's get to solutions. What are fair patterns, and what are the benefits of using them?

Marie Potel-Saville:

Yeah. Fair patterns are interfaces that empower users to make their own free and informed choices. It means that, instead of being tricked, instead of being left in the dark with a wall of jargon, et cetera, you are given the right information at the right time in your journey to understand the consequences of your choices, and you are able to make the choice that's beneficial for you and that matches your preferences. That's basically what a fair pattern is. Maybe it's interesting to get into some concrete examples, if that's helpful. So, for example, you can have a number of default settings which are harmful. It's the pre-ticked boxes; it's all of that. Well, the countermeasure - the fair pattern for that - is simply a neutral default, where you don't have the pre-ticked box. You're not framed.

Marie Potel-Saville:

Back to the question of cognitive biases: it could be empty boxes. It could be individualized settings that allow the user to choose in each single case. Perhaps, as a practical example, we could describe a default setting where you have one single box for several purposes of data processing. Well, first, that's illicit; but second, obviously, you can't decide what you agree to, simply because there's only one box. So the fair pattern is simply to have several boxes that are not pre-ticked, so that you're not framed and you can give granular consent with a distinct decision for each purpose. So, for example, you could agree to receive the newsletter but not the promotional offers from the company's partners. Does that make sense at all?
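Marie's granular-consent example can be sketched in code. The sketch below is purely illustrative - the names (`ConsentPurpose`, `createConsentForm`, `grantedPurposes`) are hypothetical, not from FairPatterns or any real consent library - but it captures the two fair-pattern properties she describes: one checkbox per processing purpose, and a neutral (unticked) default for every box.

```typescript
// Hypothetical model of a granular, neutral-default consent form.
type ConsentPurpose = {
  id: string;
  label: string;
  granted: boolean; // neutral default: always starts false (no pre-ticked boxes)
};

function createConsentForm(purposes: { id: string; label: string }[]): ConsentPurpose[] {
  // Each processing purpose gets its own checkbox, unticked by default,
  // so the user makes a distinct decision per purpose.
  return purposes.map(p => ({ ...p, granted: false }));
}

function grantedPurposes(form: ConsentPurpose[]): string[] {
  return form.filter(p => p.granted).map(p => p.id);
}

// Marie's example: newsletter and partner offers are separate decisions.
const form = createConsentForm([
  { id: "newsletter", label: "Receive our newsletter" },
  { id: "partner-offers", label: "Receive promotional offers from partners" },
]);

// The user opts in to the newsletter only; partner offers stay unticked.
form.find(p => p.id === "newsletter")!.granted = true;
```

The key design choice is that `granted: false` is hard-coded in `createConsentForm`: there is no way to construct a pre-ticked box, so a bundled "one box for several purposes" dark pattern cannot be expressed in this model at all.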

Debra J Farber:

Yeah, absolutely. You're not combining all the consent into one thing so that it's binary. You're giving kind of choices to the individual as to what they want to opt in or out of.

Marie Potel-Saville:

Exactly. To stick with privacy dark and fair patterns: very often, with privacy settings, you've got what we call a maze - something that is super difficult to navigate. You have to click at least five times to make the first choice, and then you have to go back to a different page to continue adjusting your privacy settings. None of that is by chance, obviously. So, the fair pattern that solves this type of situation is simply a seamless path.

Marie Potel-Saville:

It could take many different forms, but it could be a privacy dashboard where you have all your rights and all your options at a glance, on one single screen, with buttons, and you can decide: okay, I'm happy to share my name and my email address for that purpose with this company, but not with its third-party partners. I'm not happy to share my phone number, except for deliveries. Perhaps I want to decide on the information about how I use the services: I'm okay for your company - the one I'm buying from - to use it, but not third-party partners, etc. Basically, it's really good UX. That's what I want to say. If you think about it, UX wasn't meant to trick anyone. It was meant to help users do what they intended to do in a quick and easy way.
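The privacy dashboard Marie describes boils down to a simple data model: one independent, opt-in switch per (data item, recipient) pair, all visible on one screen. Here is a minimal sketch under that assumption - `buildDashboard` and `setChoice` are hypothetical names, not part of any real product.

```typescript
// Hypothetical data model behind a one-screen privacy dashboard:
// each (data item, recipient) pair is an independent switch the user can flip.
type Recipient = "company" | "third-party-partners";

type SharingChoice = {
  dataItem: string;   // e.g. "email", "phone-number"
  recipient: Recipient;
  allowed: boolean;   // starts false: sharing is opt-in, not opt-out
};

function buildDashboard(dataItems: string[], recipients: Recipient[]): SharingChoice[] {
  const choices: SharingChoice[] = [];
  for (const dataItem of dataItems) {
    for (const recipient of recipients) {
      choices.push({ dataItem, recipient, allowed: false });
    }
  }
  return choices;
}

function setChoice(choices: SharingChoice[], dataItem: string, recipient: Recipient, allowed: boolean): void {
  const c = choices.find(x => x.dataItem === dataItem && x.recipient === recipient);
  if (c) c.allowed = allowed;
}

// Marie's example: share email with the company, but not with third-party partners.
const dashboard = buildDashboard(["email", "phone-number"], ["company", "third-party-partners"]);
setChoice(dashboard, "email", "company", true);
```

Because every switch lives in one flat structure, a UI can render the whole thing on a single screen - the "seamless path" - instead of scattering the choices across five pages of a maze.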

Debra J Farber:

That makes a lot of sense, especially when you frame it that way. One of the reasons we're talking about this topic on The Shifting Privacy Left Podcast is that we want to shift addressing privacy much earlier - before you ever collect data: in the design phase, the development phase, the phase where you're gathering product requirements. Right? Well before you even have to think about the lifecycle of data, shift earlier into the decision-making and make sure you've got good privacy. So, privacy design and privacy UX are a huge component of that. It's the perfect conversation for today, for this show, and for this audience.

Debra J Farber:

So, here's what I'd like to do now. Your Fair Patterns project has identified seven categories of dark patterns, with, I believe, 16 different dark patterns that fit into those seven categories; and you can go to fairpatterns.com and see, for each category, a definition, the main cognitive biases that are manipulated through that dark pattern, and the main risks to individuals. But in our conversation today, I'd rather go through each of those seven categories and talk about the fair pattern countermeasures as I walk through them. Does that make sense to you? [Marie: Sure, sure, sure]. I think you already talked about the first one, the 'maze,' where you're tricking the user and they have to go through what seems like an endless number of clicks before they're able to actually do what they want - maybe opt out, cancel something, whatever. You mentioned that the 'seamless path' is the countermeasure, the fair pattern, to use instead. So let's go to the second one, which is called the 'harmful default.' What is that, and what are the countermeasures there?

Marie Potel-Saville:

Yeah. So the harmful default, you know, is this pre-ticked box that could also combine several purposes, so you couldn't decide whether you agree to one purpose but not the other; the countermeasure is a neutral default with granularity of choice, basically. So, you can decide, for each different purpose, what you agree to and what you don't. The third category is misleading or obstructing language. By the way, what we call dark patterns really doesn't matter, at least not to us. What really matters is the solution. So, the solution to misleading or obstructing language is simply plain and empowering language. And you know what, Debra? Spoiler alert to your audience: it is totally possible to explain privacy in plain language. It doesn't have to be obscure or full of legalese - absolutely not. We do that for a living. Plain and empowering language means that anyone without a legal background, without being a specialist in privacy, very easily finds the information they need; that they understand it upon first reading - that's very important; and that they also understand the consequences of their choices. So, that could be very short and clear sentences, obviously. A neutral tone of voice, also, to avoid any emotional manipulation. But sometimes it's also about adding information, precisely to empower users to understand the consequences of their choices.

Marie Potel-Saville:

Then, we've got the category that we call distorted UX, and the fair pattern for that is fair UX. This one is really common sense, I would say. A distorted UX is basically trapping users through the visual interface. So, for example, you would have an interface with your birth date and a big fat 'Share it with everyone' button, and obviously you are prompted, if not manipulated, to share this personal information; whereas a fair UX would be a visual interface that really respects your intention. So, you would have three equivalent buttons, for example, to decide with which companies or people you want to share your birth date.

Marie Potel-Saville:

It could be the company's services, it could be your friends, it could be everyone - but you really have a choice, and you've got three equivalent buttons with the same salience and the same color, and no visual interference that directs your choices. Does that make sense at all? [Debra: It does, thank you]. I think what's important to mention is that we've developed not just the concept, but also a library of patterns that we're continuously improving and expanding. That's what's great about a pattern: it's a system that will consistently solve the problem. And just so you know, we've been testing the fair patterns with users and with a range of independent experts to continuously improve them, and we hope to develop our library of fair patterns even further in the future.

Debra J Farber:

Yeah, in fact, it's not just legal experts, right? You're working with really interesting people, like neuroscientists. Tell us about the types of experts you're working with on this.

Marie Potel-Saville:

That's a great question. This is a multidisciplinary collaboration where we've got UX strategists and UX designers, but also neuroscientists - which is absolutely critical, given the cognitive biases at stake. We've got plain language experts working within the team, and obviously legal experts and privacy experts. And last year, we decided to have our concept and the library of fair patterns audited by 10 independent experts in the various fields I mentioned, including psychology, so that they would give their honest, objective opinion about what works and what should be improved. That was an amazing experience, and it pushed us even further.

Debra J Farber:

That's awesome. And just to close the loop on the last three: the fifth dark pattern is 'more than intended,' and the countermeasure is 'free action' - you can learn about all of these on the website. The sixth is 'push and pressure,' and the countermeasure is 'non-intrusive information.' And the last dark pattern is 'missing information,' and the countermeasure is 'adequate information.' Again, you can find more detail on fairpatterns.com. I'll also ask in a little bit where else people can learn about this, but for now, given that we've got privacy engineers as the audience base for this show, what advice do you have specifically for designers and developers to ensure that they are designing fair patterns as they're building products and features?

Marie Potel-Saville:

Sure, this is a very, very important point, because everything comes down to practice and practical solutions. Our advice for designers is basically to go through a series of short and simple questions. This is a framework that we've developed with Harry Brignull - he joined us in January, which is amazing. Together, Harry and I developed this fairly simple framework that we call 'Does My Design Contain Harmful Choice Architecture?' It's based on the Seven Stages of Action model by Don Norman, who needs no introduction to designers. So, we took those seven stages and applied them to harmful choice architecture; basically, when you start the product design phase, you have perception, then comprehension, then a number of further steps. What we advise designers to do is ask themselves five simple questions at each of these stages of the user experience.

Marie Potel-Saville:

The first question is Autonomy: does my design allow users to make decisions based on their preferences? The second question is Agency - back to your point, Debra - does my design allow users to take the actions they want easily, without coercion? The third question is Transparency: does my design provide sufficient objective, accessible information in plain language for users to make informed decisions? The fourth question is Honesty: does my design contain misleading information or omissions that could induce false beliefs? And the fifth question, which is probably my favorite, is Fairness: is my design likely to cause an outcome that's favorable to the business but detrimental to the users? With these five fairly simple questions, we actually catch most of the legislative and regulatory framework. It seems quite simple when I say it like that, but this is actually the result of tons of legal research to make sure that we catch the GDPR and the privacy laws currently tackling dark patterns, et cetera. So, we hope that's helpful to designers.
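[Editor's note: the five questions can be captured as a simple checklist. The sketch below is a hypothetical illustration, not FairPatterns' actual tooling; it flags which questions a given design stage fails.]

```typescript
// One review per stage of the user experience.
// "true" means the design passes that question.
interface ChecklistAnswers {
  autonomy: boolean;     // Decisions based on the user's own preferences?
  agency: boolean;       // Actions taken easily, without coercion?
  transparency: boolean; // Sufficient objective, plain-language information?
  honesty: boolean;      // Free of misleading information or omissions?
  fairness: boolean;     // Free of outcomes favoring the business over users?
}

// A stage is free of harmful choice architecture only if every answer is "yes".
function stagePasses(answers: ChecklistAnswers): boolean {
  return Object.values(answers).every(Boolean);
}

// List the questions a design stage fails, for review notes.
function flaggedQuestions(answers: ChecklistAnswers): string[] {
  return Object.entries(answers)
    .filter(([, ok]) => !ok)
    .map(([question]) => question);
}

const review: ChecklistAnswers = {
  autonomy: true,
  agency: true,
  transparency: false,
  honesty: true,
  fairness: true,
};
console.log(stagePasses(review));      // false
console.log(flaggedQuestions(review)); // ["transparency"]
```

In practice the same checklist would be repeated at each of the stages derived from Norman's model, so a design only ships when every stage passes.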

Debra J Farber:

That is really helpful, and I think you framed really well why designers and developers should care. Those personas - the designers and the developers - are a lot of the time going to be driven by executives' and marketing's desire to boost sales through these interfaces. Right? But we want to do that without dark patterns. So how can companies boost sales with fair patterns?

Marie Potel-Saville:

Well, the great news is that we've been working hard with economists and we've been studying plenty of econometrics studies, and the good news is that fair patterns produce better results than dark patterns after six months.

Marie Potel-Saville:

It's true that, for a very short time, dark patterns might be more profitable; but that's really only two or three months. The curve really changes around six months in, when fair patterns become as economically efficient as dark patterns - and after that, they become more profitable. We've been seeing that in the media sector, for example, where subscription-based models clearly show that a fair pattern is going to be more efficient, simply because people don't want to subscribe, or subscribe less, when they suspect a dark pattern. So, they will just refrain from subscribing. When there's a fair pattern, they are more inclined to subscribe because they know they won't be tricked. That builds trust, and the customer lifetime value that derives from trust makes fair patterns way more profitable - not just in the medium term but in the longer term, because it also boosts the value of your brand instead of damaging it.

Debra J Farber:

That's really compelling data. I would love for you, after this call, to send me the [inaudible].

Marie Potel-Saville:

Yeah, of course.

Debra J Farber:

Let's get this into the hands of those who are designing the UX and UI and developing new products and features, so that they can push back against marketing teams and executives who are trying to push them into creating dark patterns. If they have these compelling metrics - "Hey, we can boost sales better with fair patterns" - I think that could do a lot of good for, well, society as a whole. But first, it lets them push back on management with what is ethical, right, and going to make them more money. It's pretty compelling. Then, also make sure to share with me the document you were saying that you helped create.

Marie Potel-Saville:

The framework.

Debra J Farber:

Yes, I would love to put that in our Show Notes. So, where can our listeners learn more about dark patterns? You have a wonderful podcast called Fighting Dark Patterns - people should check that out - and, of course, there's the Fair Patterns website. But in terms of categorizing dark patterns and showing how they work and spread, which risks they create, and which laws they breach, where would you direct people to learn more?

Marie Potel-Saville:

So, without hesitation, the go-to source is Harry Brignull's website, Deceptive Design Patterns. It's a goldmine of information. Again, he was the one who invented the term, he's been leading the research ever since, and his website is a goldmine of information, as is his book. Remember that he authored a book last summer. It's super easy to read - it's really not just for designers, it's for everyone - and I cannot recommend it enough.

Marie Potel-Saville:

In terms of online information, there's also the Dark Patterns Tip Line if you want to report a dark pattern. There's a hall of shame on Harry's website, and there's also a tip line, specifically in the US, where you can report dark patterns. And then, of course, you can and should just tag the companies. If you see an interface that very much looks like a dark pattern, just tag them on social networks with the hashtag #darkpatterns. Ask the company; hold them accountable for what they produce. This really has an effect, by the way. We know for a fact that regulators are tracking those comments on social media with the hashtag #darkpatterns. It could prompt regulators to engage, to launch a legal action, or to ask questions and open an investigation. So, it's really worth spending those few minutes to post on socials if you see one.

Debra J Farber:

Oh, that's great. It feels empowering: in a world where you feel like you can't control everything, you can actually take some action that might effect some change. So that's awesome. What is the best way for people to reach out to you to learn more about Amurabi's legal design services or fairpatterns.com?

Marie Potel-Saville:

There's our Amurabi.eu website. There's fairpatterns.com. And I'm always happy to share thoughts on LinkedIn - I'm very easy to find there. Just tag me or reach out, and I'm always happy to have a chat.

Debra J Farber:

Excellent. And lastly - I usually ask this of my guests - do you have any last words of wisdom to leave our audience of privacy engineers with today? You've already given us plenty.

Marie Potel-Saville:

I guess what I really want your audience to take away is decision-making governance. We've been talking a lot about dark patterns in interfaces, but there are also dark patterns enabled by AI, and within AI itself - within the algorithms themselves. This is so critical for privacy. If we're no longer able to make our own decisions, to make meaningful choices for ourselves, it's simply the end of privacy. So, decision-making governance is really what every privacy engineer should be working on.

Debra J Farber:

Thank you so much. It's been a really fascinating conversation. Thank you for joining us on The Shifting Privacy Left Podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The AI Fundamentalists

Dr. Andrew Clark & Sid Mangalik

She Said Privacy/He Said Security

Jodi and Justin Daniels

Privacy Abbreviated

BBB National Programs

Data Mesh Radio

Data as a Product Podcast Network

Luiza's Podcast

Luiza Jarovsky