The Shifting Privacy Left Podcast
Shifting Privacy Left features lively discussions on the need for organizations to embed privacy by design into the UX/UI, architecture, engineering / DevOps and the overall product development processes BEFORE code or products are ever shipped. Each Tuesday, we publish a new episode that features interviews with privacy engineers, technologists, researchers, ethicists, innovators, market makers, and industry thought leaders. We dive deeply into this subject and unpack the exciting elements of emerging technologies and tech stacks that are driving privacy innovation; strategies and tactics that win trust; privacy pitfalls to avoid; privacy tech issues ripped from the headlines; and other juicy topics of interest.
S2E5 - What's New in Privacy-by-Design with R. Jason Cronk (IOPD)
R. Jason Cronk is the Founder of the Institute of Operational Privacy Design (IOPD) and CEO of Enterprivacy Consulting Group, as well as the author of Strategic Privacy by Design. I recently caught up with Jason at the annual Privacy Law Salon event and had a conversation about the socio-technical challenges of privacy, different privacy-by-design frameworks that he’s worked on, and his thoughts on some hot topics in the web privacy space.
---------
Thank you to our sponsor, Privado, the developer-friendly privacy platform
---------
We start off discussing updates to Strategic Privacy by Design, now in its 2nd edition. We chat about the brand new ISO 31700 Privacy by Design for Consumer Goods and Services standard and its consensus process, and compare it to the NIST Privacy Framework, IEEE 7002 Standard for Data Privacy, and Jason's work with the Institute of Operational Privacy Design (IOPD) and its newly published Design Process Standard v1.
Jason and I also explore risk tolerance through the lens of privacy using FAIR. There’s a lot of room for subjective interpretation, particularly of non-monetary harm, and Jason provides many thought-provoking examples of how this plays out in our society. We round out our conversation by talking about the challenges of Global Privacy Control (GPC) and what deceptive design strategies to look out for.
Topics Covered:
- Why we should think of privacy beyond "digital privacy"
- What readers can expect from Jason’s book, Strategic Privacy by Design, and what’s included in the 2nd edition
- IOPD’s B2B third-party privacy audit
- Why you should leverage the FAIR quantitative risk analysis model to build effective privacy risk management programs
- The NIST Privacy Framework and developments of its Privacy Workforce Working Group
- Dark patterns & why just asking the wrong question can be a privacy harm (interrogation)
- How there are 15 privacy harms & only 1 of them is about security
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
Debra Farber 0:00
Hello, I am Debra J. Farber. Welcome to The Shifting Privacy Left Podcast where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding-edge of privacy research and emerging technologies, standards, business models, and ecosystems.
Debra Farber 0:27
Today, I'm delighted to welcome my next guest, R. Jason Cronk, President of The Institute of Operational Privacy Design (IOPD), and CEO of boutique privacy consulting firm, Enterprivacy Consulting Group. He's also the author of the seminal book: Strategic Privacy by Design. Previously, Jason worked as a Technical Consultant in Verizon's Information Security Department and co-founded 3 companies. He's earned his JD with honors from Florida State University and a BS in mathematics and Certificate of Information Systems Management from The University of Rochester. I've known Jason for over a decade, and I'm delighted to have him here today to discuss what's new in privacy by design and default. And, I'm actually sitting across from him for this special episode, where we're recording live from my favorite annual privacy event, The Privacy Law Salon, in Miami. Welcome, Jason.
R. Jason Cronk 1:24
Debra, thanks for having me. It's interesting, you reading my CV like that. Some of those things I haven't heard in a while. I've been so focused on my more recent accomplishments, which you also mentioned: being President of the Institute, which we just founded about 2 years ago; my privacy by design book, which was in the works over the last couple of years; and then my consulting, which has also been kind of my life the last couple of years.
Debra Farber 1:49
Yeah, excellent. Well, I'm really glad to unpack some things with you today. So Jason, you state in your LinkedIn bio that your work at Enterprivacy Consulting Group focuses on helping companies overcome the socio-technical challenges of privacy. What do you mean by that?
R. Jason Cronk 2:06
So, privacy is not just a technical term. It's not just a legal concern. It's an organizational concern as well. So, it's about helping companies figure out how they interact with individuals, be they consumers, investors, or employees. And again, that includes both social systems - the way people work together and interact, like I said, in terms of a business or organization - and technical systems, and the interaction between the two. It's a long-winded academic term to say the least, but it's definitely just trying to expand and say this is not just a technical subject and this is not just a legal topic.
Debra Farber 2:46
That makes a lot of sense, because sometimes we couch stuff in terms of "digital privacy," and this is a good way to remember that it's beyond just the digital.
R. Jason Cronk 2:56
Yeah, so I talk about privacy in terms of interactions with people. Either you're interacting directly with them, as we are here today; or you may be interacting with their data, which is kind of a proxy for them. And, in doing so, if you think about privacy that way, it's a much more expansive world, and you get into topics that are very much top of mind for regulators, like dark patterns and deceptive design, where you're trying to manipulate people and influence their decision-making. And the data privacy thought is, "Is that a privacy issue?" But it is because, again, it's about autonomy. It's about your personal decisions about how you want to interact with the world, and if we're trying to influence them or, again, manipulate them, then that's a potential privacy problem.
Debra Farber 3:43
Yeah, that's one of Dan Solove's privacy harms: "decisional interference." Right?
R. Jason Cronk 3:48
Yes, absolutely. Or, if you were to take Woody Hartzog's "Three Pillars of Privacy," it's "autonomy."
Debra Farber 3:55
Right. These days, I talk about privacy as being part of autonomy fairly often because there's a lot of conflation - and I know I talk about this way too much in all of my episodes, so I'm not gonna go into detail - but there's definitely a marked conflation of freedom of speech with privacy, and I always use the example of how they're both under autonomy. They're both part of freedom, but they're not the same thing.
R. Jason Cronk 4:15
Yeah. And, if you think about it, you know, for a lot of people over the past decade or two, kind of the fundamental concern about information privacy has been notice and choice. Now, notice and choice has its problems, but the fundamental thing is about making sure people are making informed choices, i.e., having autonomy over their data. So, this all kind of stems from the same underlying concept of how we as a society interact with people, and what authority we give them to make decisions about their interactions, and what authority we take from them to make their decisions for them.
Debra Farber 4:54
Jason, I love your book, Strategic Privacy by Design, because it builds upon the great academic research of others to present a set of approaches, methodologies, and tactics based on use cases, risks to personal data and the humans behind that data, the privacy harms to be avoided, remediation techniques, etc. And, I also know the book has been out several years now and you just published its 2nd edition. Can you tell us just a little bit more about the book generally - like the whole book, not just the update - and who your intended audience is, and then what has been added to this updated edition?
R. Jason Cronk 5:32
Sure, just know that there's gonna be a quiz at the end of this podcast. No, seriously, I do run into a lot of people who are like, "Oh, I got your book," and I'm like, "Have you read it?" and they're like, "Oh, it's my next book to read," and I'm like, "Well, there'll be a quiz. Let me know when you're done." But, so the intended audience is really any privacy professional, broadly. I mean, it is billed as a textbook for the CIPT, but it is not necessarily a technical book. I get a lot of people commenting back to me that it was an easy read, that it was easy to digest. I use a lot of examples. So, one of the things in my writing style that a lot of people have commented on is, because I've been doing training on this topic for years, I've kind of re-integrated what I've learned. Like, I talk to people - "Here's this concept" - and then I look in their eyes, and I see what they're getting and what they're not getting, and then, for the next time, I re-envision that training so I can explain to people, and help them understand, the things that the previous students didn't get. And, that shows in my writing. As opposed to me just sitting down for the first time, without ever having tried to explain this to people, writing what's in my head and it being complicated for people, I'm using the same analogies and information to help people understand the concept, not just, again, kind of laying it out.
R. Jason Cronk 6:51
So, the book lays out sort of the foundational concepts; and, like you said, I leverage the work of other people judiciously and credit them, necessarily. I am not the inventor of many of these concepts, but I've just found that these things have been helpful: Dan Solove's Taxonomy of Privacy; Jaap-Henk Hoepman's strategies and tactics for mitigation; FAIR: Factor Analysis of Information Risk, for doing privacy risks; The Future of Privacy Forum's harms - I think it originally came from their harms of automated processing, but the harms are kind of widespread across any types of privacy harms (those are more the tangible harms, where Dan's are the moral harms); Lorrie Faith Cranor's privacy notice design space, which is perfect as a mitigation. Jaap-Henk Hoepman's tactic is very high-level; it's like, you need to inform people. Well, then Lorrie goes into more of the how: how do you break up informing people? You know, you use these 4 different channels or modalities, and so it's really fascinating.
R. Jason Cronk 7:56
So, now what's in the new book? A lot of people are like, "Well, do I need to get the new book if I've read the old one?" There's about 30% more content, so there's probably an extra 100 pages, actually. Some of that has been re-written. I re-wrote the threats and risk material. Even though FAIR is a quantitative risk model and I really want people to do the math, I recognize that most people don't like math. So, I put that in an appendix and left the kind of descriptive chapter in place. I've gone into more detail on how to do threat analysis and mapping out and actually diagramming the threats. And again, this comes from my work in consulting and my work in training. I've seen what works and seen what doesn't and kind of re-integrated that; it wasn't a one-size-fits-all, I-wrote-it-and-then-it's-done. This is an evolving field and I'm looking to evolve it.
R. Jason Cronk 8:50
One of the things I also did: I used to keep separate... like, Dan's moral harms I called "violations," and then the tangible harms I called "harms." Now, I call them "Harms 1" and "Harms 2." They're both types of harms. I used to shy away from the term "harm" because lawyers tended to read that as kind of the damages and harms you would find in a legal case. I have also expanded on the examples. So, each chapter has a set of exercises. It goes through an example of a particular app and here's how you would apply it in real life, and then here are some exercises. And, I have a whole chapter at the end with answers. Before, I had questions and exercises in the book. Now, I actually provide model answers, because I was getting questions about that and I wanted to show people, not just what any answer is, but what is the "model answer" from the information. And then, there is also an appendix, which is a glossary, because being a lawyer, I use terms in very specific ways. And probably one of my major pet peeves in the industry broadly is this kind of loosey-goosey talk we talked about. There are certain terms that people throw out. My biggest one is probably "privacy risk" and people kind of using that in a very loose way and not really understanding...
Debra Farber 10:12
Risk to whom?
R. Jason Cronk 10:13
Yeah, there are all sorts of ways they're kind of mismanaging that term. So, I lay out in the glossary, "Here are the defined terms in this and specifically how I'm meaning them." And one thing anybody who reads my book will realize is I like a lot of categories. I put things in categories; I think it's easy for people to put in their head. It seems to be the way people think: they have a section of their brain, and if they can put something in a category, it makes it easier for them to frame and go through. So, we have categories of the taxonomy of privacy harm; we have the factors of information risk; we have the strategies and tactics. And so, it's all very categorical, and it makes it easy to go through and diagnose potential privacy issues. Because again, what I run into a lot in training is people come up with these fantastical privacy harms, but they're all muddled and they can't get it straight in their brain. And, if you can't get it straight in your brain, you can't develop mitigations. But, if you can kind of silo it and say this is this very narrow, specific harm; here is this very narrow, specific mitigation we can do right against that particular harm - as opposed to having this very muddled concept and having this muddled relationship between them....
Debra Farber 11:31
...and then muddled communication both internal and external.
R. Jason Cronk 11:34
It's all about developing clarity of thought and kind of getting through this. Okay, that was a long winded response.
Debra Farber 11:40
But a great one, but a really good one; and I really look forward to reading the 2nd edition. I know I have one coming my way. I have read the first one multiple times. I've got highlighter all over it. I've used it for my own analysis.
R. Jason Cronk 11:54
Let me know when you read it, and I'll give you the quiz.
Debra Farber 11:56
Excellent. You just want to give quizzes, I'm feeling.
R. Jason Cronk 11:59
I do.
Debra Farber 12:01
Well, okay, so Jason, you've been part of several privacy by design frameworks that have recently been published. And I'd like for you to unpack the differences between some of them for our listeners. And the first one I'm going to talk about is ISO Privacy by Design, the new ISO 31700. So 31,700? I don't know how people say it. The standard for Consumer Goods and...Privacy by Design Standard for Consumer Goods and Services. It's taken about, what is it 5 years now? 4 years?
R. Jason Cronk 12:30
4 years, yeah.
Debra Farber 12:31
4 years to develop this global standard. For those who are not familiar with the ISO standard process, for each country that participates in the international standards development process, a domestic standards body runs a Technical Advisory Group (TAG) of experts. In the U.S., that standards body is ANSI, and I just remember that the "A" stands for American.
R. Jason Cronk 12:51
American National Standards Institute.
Debra Farber 12:54
And, I was actually the Vice Chair of the U.S. TAG for the standard for the first 2 years, but then needed to step down due to a change in jobs; and so I have a lot of knowledge about how it started, but a lot less about how it's going. So Jason, in your opinion, how's it going?
R. Jason Cronk 13:09
Well, Debra, you and I are in the same boat. So, I was only involved in the 1st year as a subject matter expert, and I ended up leaving. I personally got a little frustrated with the ISO process. It was...one, it was very bureaucratic, which I understand, you know, when you're trying to coordinate, you know, dozens of countries and dozens of, you know, stakeholders within each country. They have to have a very defined process. I get it; it was still frustrating. Then, even within that, just the stakeholders that were providing input, I felt didn't have enough subject matter expertise in privacy or even more particularly in privacy-by-design. Now, I am not saying that my privacy by design process is the one and only way to do it. There are many ways to proverbially skin a cat. Right? But, it seemed like a lot of the people who were participating didn't have anything. They were kind of going at it without kind of the background knowledge. And then add to that kind of the bureaucracy and it was just a frustrating experience. Now, I have not looked at the new standard. They have not publicized it as of this recording. It's coming out in a week or two. I will get a copy of it. I have looked through the outline. And this was part of my concern before is that because you had to have consensus internationally, there seemed to be a...
Debra Farber 14:31
Let me stop you there for a second. So the consensus part...each country had to come to consensus. So our United States TAG would have to have a particular perspective that they're bringing to the international discussion, and so it was very much consensus-building as opposed to what is the right thing and then let's push that forward. You had to have the political will in each TAG.
R. Jason Cronk 14:52
So, yeah, so going along with that, the problem I saw was it was kind of like a least common denominator: what is the least common thing that everybody can agree on at the international level to get it passed? So, I think - again, I haven't seen the final standard, but looking through the outline, which they published - it seems to be very much the same old, same old privacy governance issues, FIPPs: Fair Information Practices and those sorts of things. I reserve full judgment until I read it, but again, I kind of go in with a little bit of a... my eye askew, I guess, or....
Debra Farber 15:29
Yeah, I guess I'm kind of in the same boat. I do remember, we were beginning to frame the entire project as one...how do you bring a product or service to market? So, it didn't only include product development. Also, I think, it included the marketing of it and, you know, all the way to retirement of the product. So, I do remember thinking that was pretty novel, but also thinking only people who pay the pretty expensive rate to get access to an ISO standard - it's not free and available to anyone; you would actually have to be a member of ISO in order to see it - and I'm wondering, do you know when it's published, I mean, will we be able to see it or only if you pay for it?
R. Jason Cronk 16:08
Only if you pay. And it's interesting, so I actually used to think - and I was disabused of this by one of the people who was involved, Michelle Chiba from Canada, who was the Editor or some such role - it's apparently priced per page. That I didn't know. So, you have to pay based on the number of pages that it is. I thought it was like a set rate for every standard, but apparently not. So, it remains to be seen how much it is, and maybe I won't buy it. But, I do have an underlying need to want to see what it is. But yeah.
Debra Farber 16:42
Right. And then, the other thing I remember about it is, you know, I was thinking, "Oh, great, then there'll be a certification against it," but this is a different process through ISO; this is not like ISO 27001/27002, where you can have a certification against, you know, the security controls.
R. Jason Cronk 16:57
Yeah. So, again, that's kind of the bureaucracy. Right? So ISO has this thing called a "conformance standard." And so, in order to put out something that an organization could conform to, it has to go through a special committee and special process and have certain terms in it. Again, I'm not really sure, because this wasn't a conformance standard. And, by the way, this is the only ISO standard I've been involved in working on. But that was something that we expressed early on. We were like, "Oh, we want it to be certifiable against," and they said, "No, it would have to go through another process." Now, apparently, they could take it and go through that bureaucratic process and make it a conformance standard, but I think then it becomes... I mean, this is purely speculation, but my understanding is, you have some standard prescription, like "You have to have X." Okay, well, what does it mean for a company to "have X?" And so they develop what are the measurements and effective controls you have to have in order to claim you do X, whatever that is. I think that's what they would need: we would not only have to build what the standards are - what are the 17 things you have to do - but, for each one, we would have to prove how an assessor would measure that. And then, how would they ensure that it was effective, and all those sorts of things. So, I think that's why this is not a conformance standard.
Debra Farber 18:15
Got it. And, if we were even to bring it to the conformance standard level, my understanding is we would have to go through another 3-5 year process.
R. Jason Cronk 18:22
Exactly. Again, bureaucracy.
Debra Farber 18:25
And then the way I think this could be helpful, besides an internal gut-check and a framework for internally bringing a product to market in a privacy-compliant way, might be to require in your contracts with vendors and partners to comply with the standard. So, you could make it a standard you get others to comply with through contract. But, it'll be interesting to see how much uptake the standard gets given that you have to pay to look at it and there's no real, you know, mechanisms to assure against it.
R. Jason Cronk 18:54
Yeah, so one of the funny things when we were involved, in that first year I was involved: some of the companies that were involved were large name-brand companies, you know, and I spoke to some of the people. I'm like, "Okay, if this passes, are you going to, you know, adhere to it?" and they were like, "No, probably not." And it's like, okay, so why are you involved in trying to get your input into it? So, yeah, I don't know what the market demand or impetus is, especially given, like you said, it's not going to be a conformance standard. So, there's no seal of approval that you can get from a third party that says you're doing it.
Debra Farber 19:33
Right. Okay, so let's now talk about something you do know a lot more about. I know you're part of the team that created the nonprofit, Institute of Operational Privacy Design, or IOPD. The org's mission is to define and drive the adoption of privacy design standards to provide accountability and public recognition for good privacy practices. And then the website further states that the objectives of the IOPD are: 1) to maintain standards for privacy design and risk assessment; 2) hold organizations accountable through the certification mechanism; and 3) educate and evangelize the standards. So, tell us why the IOPD wanted to develop another privacy-by-design standard and... I guess I don't have to ask why you don't feel the ISO standard was sufficient.
R. Jason Cronk 20:22
Well, I mean, at the baseline, obviously, like we said, it's not a conformance standard. So it's a different animal. But there are... so for a number of years, I have been kind of ruminating on this. When I was involved in a lot of vendor negotiations, we would talk to vendors and we'd say, "What are you doing for security?" and they would hand us their ISO 27001 certification; they would hand us their SOC 2; you know, their CSA (Cloud Security Alliance) certification; so they would have something. And then we'd ask them, "What are you doing for privacy?" and, you know, they would go, "What? What's that?" They would hand us their ISO 27001 and be like, "Here, look, we're securing personal data and that's what we're doing." So, there really wasn't that. So, I've been thinking for years, it'd be nice if, from a business-to-business perspective, I had the ability to say, "Give me your third-party audit related to privacy so that I don't have to do it, and I know that somebody has looked at it and you're doing something right." It may not be, you know, perfect, but at least you're doing something beyond, you know, saying you're doing something.
R. Jason Cronk 21:29
And then the second thing is I see a lot of companies that are out in the market saying, "We do privacy by design." Okay, what does that mean, right? They may be doing something really well; they may be doing nothing really well; it may be completely a marketing gimmick - "Hey, we've got the world's best coffee!" But, you know, "We're doing privacy by design." So what does that mean? And so I felt there was a need for kind of a gold standard: "Here, now you have some process to follow so that you can say you're doing privacy by design." Now, there have been other privacy by design standards. Ann Cavoukian had something up in Canada that related to her 7 principles, but again, it's kind of more organizational. So, another one we haven't talked about is IEEE - they came out with their 7002, which is privacy by design in products and services.
Debra Farber 22:24
Tell us a little bit about that one.
R. Jason Cronk 22:25
So, I did a comparison of it; and again, from my perspective, I'm trying to create a sort of gold standard that holds a company to a higher level. I think the IEEE one is alright, but again, it's kind of very watered-down. It was based on consensus. They had a lot of people involved. And, it just doesn't do enough from my perspective. We kept our standards committee rather small and developed a tight standard. It still took us like a year. And then, we accepted public comments, and we made some adjustments based on the public comment, and it is an evolving standard. We will continue to look at it. Ours is also... so this is different from, say, the ISO standard... ours is about the design process. So, one of the things in researching and kind of deciding to establish this nonprofit was I was talking to companies and I'm like, "Well, what do you want? Do you want a certification for a product or service, or do you want a certification for the business?" and there were different answers.
Debra Farber 23:26
The answer was, "I don't know?"
R. Jason Cronk 23:27
Yeah, well, large enterprises wanted their business processes certified - that they were building their products with privacy in mind. They didn't want to go through... they've got 100 products. They don't want to have to certify every single product. Smaller companies with single products, single services, they wanted a seal of approval that said, "Yes, our product has privacy built in." So, that's why we are doing kind of a bifurcated standard. We're doing a standard for the design process, which is what we finished; and our next standard that we're going to work on is for the end product. So, in theory, if you have a product development lifecycle that has privacy built into the design process, and then you output an app, then that app should be able to meet the second standard (which would be for a particular app or service). Now, the main distinction there is the design process says that you have a risk model. You understand what the risks are. You have a level of tolerance that you adhere to with regard to privacy risks, but it doesn't specify what that tolerance has to be. So, in theory, you could have a design process and say, "Our tolerance is we tolerate everything," but as long as you're going through the ruminations of: you have a risk model; you do risk assessment; you do trade-off analysis...
Debra Farber 24:45
...check-the-box compliance....
R. Jason Cronk 24:46
I don't want to say that, because there is some... you know, there has to be that flexibility there. But, the end product may not meet everybody's definition of what privacy built-in is; but you at least have the procedure in place to design privacy in. The end product is going to be a little bit more difficult because we want to set what that acceptable level is. You think of, like, Underwriters Laboratories (UL) and electrical devices, right? They set an amount of... you know, like an acceptable defect rate. Right? Like, 0.0% can be defective and deliver X amount over and above electricity or something. I'm kind of making up this stuff because it's not an area of expertise that I know. But, we want to define, okay... and this is a problem I run into discussing privacy risks with Privacy Officers or companies all the time. So, let's say you're a Privacy Officer of the company, Debra, and I come in and say, "Your product is going to result in 100 divorces a year. Is that an acceptable level of divorces?"
Debra Farber 25:53
I would have to look at a bunch of factors. I would not know based on that information alone.
R. Jason Cronk 25:58
So, a lot of Privacy Officers were like, "No, no. Zero divorces." That is not a reasonable response. Right? If you were in the automobile industry and you said, "I want to have zero deaths from my cars," that's a great lofty goal; it's not going to happen unless you're not producing any cars. Right? So, you have to have some level of... you can do things to make it safer: airbags; you know, speed limiters; seatbelts... you know, all sorts of things to make it safer. But, nobody in their right mind is gonna say that the only goal you can achieve is zero deaths out of your automobiles, otherwise you're not going to be in the auto business. So, you have to understand what a reasonable amount of risk is for your product. Again, if you're only selling 100 and you end up with 100 divorces, maybe that's a problem. But, if you're Google, and, you know, you have 3 billion customers, maybe 100 divorces isn't that bad in the grand scheme of things. Again, we have an acceptable number of losses in any industry. We strive for better, but we can't achieve perfection. So, we have to find where that dividing line is; and so, for our end product, we have to figure out what that acceptable level is so we don't have the problem of a company saying, "Yes, we build in privacy, but 100% of our clients result in divorces and that's acceptable."
Debra Farber 27:22
It's just a different type of harm. Right?
R. Jason Cronk 27:25
So then the question is, "How are we going to build consensus among our standards committee on what the acceptable loss rate or surveillance rate or harm rate is?"
Debra Farber 27:33
How did you approach that? What did you do?
R. Jason Cronk 27:35
Oh, no. We haven't gotten to that yet. That's why we pushed that off to the second standard, because we need to come up with that so that it's not open-ended and companies can't just make their own thing up. But, we also have to be mindful that zero is not realistic for whatever measurement we use.
Debra Farber 27:54
Yeah, I always used to kind of make the case for that whenever I would talk about security and people... like, the difference between security and privacy professionals' behavior. I remember early on, when Facebook came to market, all the privacy experts went on to Facebook to understand the controls, to understand what's going on, understand.... The security people were just opting out. They're the type of person who's like, you know, they use Tor and they use all of these... you know, "How can I minimize my exposure?" Right. And so, to me, the way I expressed the difference is, like, you know, you don't just unplug from the Internet because you don't want any risk. To participate in society, you have to bear the risk. To have a business, you must bear risk. That's just fact.
R. Jason Cronk 28:42
So, interesting. I think one of the topics you wanted to discuss was risk, and I don't know if we can transition now. So, interestingly enough... FAIR, Factor Analysis of Information Risk, doesn't talk about probability of something happening. It talks about frequency: the understanding that this is going to happen a certain number of times over a certain time period - might be a year, might be 10 years, might be 100 years. And, maybe it's only once every 100 years, or maybe it's 1,000 times every year, but there's a frequency to whatever event. Now, they're talking mostly about breaches and cybersecurity incidents. But, here's the thing that a lot of people don't realize, and that's what I say... there are a lot of people who are approaching privacy risk from a naive perspective within companies, and naive in a number of ways in terms of how they approach it. But let's say I were to say to you that our annual risk from whatever we're doing is $1 million a year. What's your response? Is that good or bad or what?
Debra Farber 29:41
Compared to what?
R. Jason Cronk 29:42
Ah! Exactly. Exactly. So again, are you Google? Are you the U.S. federal government, you know, The Department of Defense, which has a $300 billion a year budget? You know, $1 million is like chump change, right? But, if you're a small business, you know, $1 million is a lot. So, you have to be able to compare it against something, and that's the thing we don't really have from a privacy perspective: how do we compare? Is this good or bad?
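To make the frequency framing above concrete: FAIR expresses risk as an expected loss over time, roughly loss event frequency times loss magnitude per event. A minimal sketch follows; the function name and the figures in it are invented for illustration only.

```typescript
// FAIR-style annualized exposure: expected loss per year is roughly
// loss event frequency (events per year) times loss magnitude per event.
// All names and figures here are invented for illustration.
function annualizedRisk(lossEventsPerYear: number, lossPerEvent: number): number {
  return lossEventsPerYear * lossPerEvent;
}

// An event expected roughly twice a year, costing about $500,000 each time:
const exposure = annualizedRisk(2, 500_000); // $1,000,000 per year

// Whether $1M/year is acceptable depends on what you compare it to
// (revenue, a budget line, an agreed tolerance), which is the point above.
console.log(`Annualized exposure: $${exposure.toLocaleString()}`);
```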
Debra Farber 30:08
Like benchmarking generally?
R. Jason Cronk 30:09
Yeah. And so, we don't have all of those. I've tried, and I've talked to a lot of people about, like, has anybody in the academic world done any research on risk tolerance in the privacy space, and there just isn't anything, to be honest. I can't find anything where people have looked at trying to figure out what a good tolerance is. It's a little bit easier in the cybersecurity world or in other things because we can equate things with dollars. But in privacy, there are a lot of non-monetary harms that are not easily equatable. Now, some of them, courts have tried. Like, if this results in your incarceration, and you're incarcerated for 20 years, well, we look at what her earning potential was over 20 years, and so we'll pay her, you know, based on that earning potential, and so we can translate years. Now, a lot of people would say, "Well, that's not a real good equation," but it's the best thing we kind of have to remediate or provide for people. But again, if I were to say... we're in a hotel now; if somebody were to put a camera in your hotel room and you were watched for the 3 days you were at a conference, okay - privacy harm, right?
Debra Farber 31:24
Do I know about this or not?
R. Jason Cronk 31:25
No.
Debra Farber 31:25
Okay, privacy harm.
R. Jason Cronk 31:26
Privacy harm, right? Well, that's what Dan (Solove) was talking about at one point: consent kind of has this magical property of turning a harm into a benefit. You know, if you're...
Debra Farber 31:38
Only if I'm compensated, yeah....
R. Jason Cronk 31:40
Well, no. If you're paying for somebody at your house to, say, monitor your camera, you know, because you're an Alzheimer's patient or you're at risk of something, you know, then it becomes a benefit, or you're getting paid for it or whatever; versus if it's not consensual, you don't know about it, it's harmful. But the point being: if you say your product is going to result in 100 people being watched, without their knowledge, in their hotel room, what can you compare that to? Right? So when we say $1 million, people are like, "Oh, that's a big number." Now, granted, maybe we have considerations that big for Google or DOD or something like that, but we understand that $1 million is somewhat substantial for most people... it's worth thinking about. Whereas if I said, "The risk of financial harm is $10 a year," almost everybody would be like, "Yeah, okay, whatever." But, if I say the risk is 10 people being surveilled in their... or monitored in their hotel room...
Debra Farber 32:38
....how do you compare that? Yeah.
R. Jason Cronk 32:39
Is that good or bad or what? Yeah, so we just don't have the language to talk about this yet. It's something we're still kind of working through. And it's not even... I want to pinpoint the difference. So again, this is one of the things that people don't think about, and I have to disabuse people of all the time: we're not talking about tangible consequences. So, this isn't necessarily your emotional distress when you find out about it. It isn't the financial harm when somebody blackmails you. I mean, those are all certainly considerations. This is just, you know, the moral harm of somebody monitoring you in your hotel room without your knowledge. Right? And we all would agree that it's a privacy violation, even though there are no tangible consequences as a result.
R. Jason Cronk 33:24
And the problem is, if you only focus... I've been in a couple of conversations - I have an IAPP blog post about this - if you only focus on the tangible consequences, then there are things you can do to minimize the tangible consequences that don't minimize the moral harm. I encrypt the camera in your room, so you're less likely to find it with some sniffer and be upset that there's a camera. I decide, you know, as a policy, I decide not to sell it, so you're less likely to find it on the Internet. I decide not to blackmail you, so you're not going to get blackmailed. I make it a pinhole camera, so you're less likely to find it. Right? So, I've done all the things to reduce the risk of tangible harm, but I haven't done anything to reduce the moral harm of me, you know, watching you in the first place.
Debra Farber 34:10
Wow. Yeah, that's a really good point and I will not be sleeping tonight. No, I mean, it's a really good point, and I haven't been thinking about that recently. That is very, very helpful to keep in mind. And before we circle on to some hot topics in web privacy or more risk....
R. Jason Cronk 34:28
Sorry, let me just add one thing. So again, I don't have a way of setting tolerance, but what I do have is a way of measuring how harmful something is. So, what I've done in the past with companies is surveys. Right? We put these vignettes, these different scenarios, out and we ask, binary, yes or no, do you think this is a privacy harm or not, and find out what people think. So, if I said to people, "Hey, a camera that you don't know about is monitoring you in your room," most people would probably say, "Yes, it's a harm." Okay, a camera in a store, you know, watching people for theft - is that a privacy harm? Most people - you know, we've gotten desensitized; we're used to that - would probably say, "No." In a dressing room?
Debra Farber 35:14
Yes.
R. Jason Cronk 35:15
What if there's a notice in the dressing room that you are being monitored for theft? Because I have been in dressing rooms with doors that say that. Right? So, that's kind of more iffy, but that's how we measure. So, we say, "This is gonna affect so many people," and if the majority of people say, "Yeah, not really a harm," then it's probably low-severity, low-impact. So, we're able to, quote unquote, measure how harmful this is. If most people say, "Yeah, this is gonna be very..." you know, "That's egregious. Don't do that!" - then, if there's a chance of that, that's a potentially high harm. I've done surveys separately in the U.S. and the EU, and it's interesting, the dichotomy between the U.S., who will say, in certain circumstances, "Yeah, that's not that bad," and the EU, who will be like, "Yeah, that's horrible!"
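One way to picture the survey approach described above: put each vignette to respondents as a binary "is this a privacy harm?" question and treat the share answering yes as a rough severity weight. The sketch below is a hypothetical illustration; the vignette names, data, and function are invented, not from Jason's actual surveys.

```typescript
// Each vignette gets binary yes/no responses ("is this a privacy harm?");
// the share answering "yes" serves as a rough severity weight.
interface VignetteResult {
  vignette: string;
  responses: boolean[]; // true = respondent judged the scenario a privacy harm
}

function severityScore(result: VignetteResult): number {
  if (result.responses.length === 0) return 0;
  const yes = result.responses.filter(Boolean).length;
  return yes / result.responses.length;
}

// Invented example data loosely mirroring the vignettes in the conversation:
const results: VignetteResult[] = [
  { vignette: "Hidden camera in a hotel room", responses: [true, true, true, true, true] },
  { vignette: "Anti-theft camera on a store floor", responses: [false, false, true, false, false] },
  { vignette: "Camera in a dressing room, with notice", responses: [true, false, true, true, false] },
];

for (const r of results) {
  console.log(`${r.vignette}: ${(severityScore(r) * 100).toFixed(0)}% judged it a harm`);
}
```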
Debra Farber 36:04
Right.
R. Jason Cronk 36:04
So this goes back to... it's also about who, from a jurisdictional standpoint. So, privacy risk is not this objective fact; it has this subjective interpretation related to the people being affected.
Debra Farber 36:17
It's cultural. It's regional. It's... Yeah, I absolutely agree with that. And before we move off the topic of risk, do you want to also tell us a little bit about the NIST Privacy Framework and the Working Group - The Privacy Workforce Working Group? First, what is The NIST Privacy Framework? Why was it developed, and how is it helpful to global organizations?
R. Jason Cronk 36:38
So, I think the relation there is...a lot of people don't realize is...the NIST Privacy Framework is a risk-based framework.
Debra Farber 36:45
Definitely risk-based!
R. Jason Cronk 36:46
And most people coming at it from a checklist perspective, they're like, "Oh, it has all the things you have to do!" But, if you're doing it right - and a little side note: I'm actually working on a book on this, so I hope to publish a book on using The NIST Privacy Framework; and, I do training on this. I did some training last year, trying to develop some training, because again, I think a lot of people approach it very naively. And understandably - there's so much going on in privacy, it's hard to, like, get information; but, if you really dive into it, there's a lot there.
R. Jason Cronk 37:14
So, The NIST Privacy Framework came out about two years ago, and it was developed as a partner to the NIST Cybersecurity Framework, which helps companies organize their business - organize their cybersecurity program, or in this case, their privacy program. And honestly, let me rephrase that, because this has been a contention recently within the discussion groups that you were talking about - The Privacy Workforce Working Group: it is not developed for your privacy program; it is developed for your organization to build in privacy, because there are certain what are called "outcomes" that may not be the purview of your privacy program, but they have to be done by the organization as a whole in order to help support privacy. Dylan, one of the people from NIST who's heavily involved, said it really well - I can't remember exactly; he said something to the effect of, "This is not for your privacy workforce, but this is for your workforce working on privacy." So again, it's for the broad workforce. There are tangential effects on the privacy program from other things - it's not just the privacy program people working on it.
Debra Farber 38:24
Well, what's the purpose of The Privacy Workforce Working Group within NIST?
R. Jason Cronk 38:29
Okay, so this is something they started last year. Again, it was done first in the cybersecurity world. The idea is you have these outcomes - there are a hundred outcomes, and the NIST Framework is flexible, so you can add new outcomes or not. You don't have to subscribe to all of the outcomes, especially if you're a smaller organization. That's one thing: a lot of companies look at it and are like, "Oh, this is too much!" No, if you're a florist shop, you don't have to do everything; you have to pick and choose what's important to you. But, the idea is you have this outcome, and then for the organization to accomplish that outcome, it has to do tasks. So, there are these tasks, knowledge, and skills statements. Your workforce has to go through and do these tasks; they have to have this knowledge to do that task; and they have to have the skills to accomplish this task. And one of the good things about this is, if you're building a privacy program, you can go through and say, "We want to achieve these outcomes. Well, here are all the tasks we have to do within our program. We need to hire people, or have people within our company, who are doing these tasks. Here's the knowledge that we have to put in our job recs" - you know, what knowledge they need to bring to the company, or what skill set.
R. Jason Cronk 39:39
Now, from a privacy training standpoint - this is where I really get interested - once they're finished with this, I'm going to align my training and say, "Here are the knowledge and skills that my training imparts so that you can do these tasks in a privacy program and, you know, so that you can get these jobs in the future."
Debra Farber 39:59
Right, which is great, because that to me is like go-to-market messaging right there; you're meeting the needs of the market.
R. Jason Cronk 40:04
And this goes back to what I was saying earlier: I'm very categorical. I like having everything in a nice, neat little bucket; and, so this helps me say, "This training helps you with this knowledge or helps you learn this skill, which will help you accomplish this task, which, you know, will ultimately result in your organization achieving this outcome."
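The outcome-to-task-to-knowledge/skills chain described above can be pictured as a simple nested mapping. The sketch below shows only a hypothetical shape; the identifiers and statements are invented placeholders, not actual NIST Privacy Framework or Privacy Workforce Working Group content.

```typescript
// Hypothetical shape of the outcome -> tasks -> knowledge/skills chain.
// Identifiers and statements are invented placeholders, not NIST content.
interface TaskStatement {
  task: string;
  knowledge: string[]; // knowledge statements needed to perform the task
  skills: string[];    // skill statements needed to perform the task
}

interface Outcome {
  id: string;
  description: string;
  tasks: TaskStatement[];
}

const exampleOutcome: Outcome = {
  id: "EXAMPLE-OUTCOME-1",
  description: "Systems that process data are inventoried",
  tasks: [
    {
      task: "Maintain an inventory of systems that process data",
      knowledge: ["Knowledge of data inventory methods"],
      skills: ["Skill in data mapping"],
    },
  ],
};

// A training course or a job rec can then be described by the knowledge and
// skills it covers, and traced back to the outcomes those support.
const knowledgeNeeded = exampleOutcome.tasks.flatMap(t => t.knowledge);
console.log(knowledgeNeeded);
```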
Debra Farber 40:25
I love it. That's great. Okay, so let's talk about some hot topics in web privacy right now.
R. Jason Cronk 40:31
Web privacy. Okay.
Debra Farber 40:33
Yeah. First....
R. Jason Cronk 40:34
There's a world... There's a whole wide world beyond the web.
Debra Farber 40:36
I've heard that. Well, you know, more like kind of some of the interesting things going on. So first...
R. Jason Cronk 40:42
Don't ask me about Global Privacy Control.
Debra Farber 40:45
Okay, well, it's more like not specific. So, let's talk about the market trends that we're seeing as a result of Schrems II and the global tightening of the noose around the Internet cowboys who played fast and loose with our personal information, treating humans as a product and giving rise to surveillance capitalism. So...
R. Jason Cronk 41:04
That's a trigger word. Not for me. For somebody.
Debra Farber 41:07
So? "So" is a trigger word?
R. Jason Cronk 41:08
No, "surveillance capitalism."
Debra Farber 41:09
Oh! Yeah, yeah, yeah.
R. Jason Cronk 41:10
Didn't you learn anything today?
Debra Farber 41:12
Yes, yes. Today, I got in trouble in a particular Privacy Law Salon...
R. Jason Cronk 41:17
Chatham House Rules!
Debra Farber 41:17
...group. Yeah, I'm not gonna say anything... just saying the words "surveillance capitalism" offended someone. Okay, so from a technical standards and specifications perspective, you know, there have been attempts at Do Not Track and now Global Privacy Control to kind of bring some sense of, you know, consumer choice to the web, and Do Not Track as a concept ended up going nowhere since 2012. Do you have thoughts on whether GPC will move the needle on web privacy? And, if no, we can just move on to the next question.
R. Jason Cronk 41:47
I do think it will... from the perspective that this has regulatory backing and force of law in California, which is a big market, and beyond - potentially within other state laws and maybe even GDPR. The thing, and why I was kind of joking about not bringing it up, is because it is fascinatingly simple and complex at the same time, and I'm doing a talk on it with somebody in 2 weeks, and we put in a couple of proposals to do talks. But, in researching the area, I am just kind of amazed at the complexity of it. So, I mean, just to give you a sample - okay, so it's a binary signal from a browser, and it can come either as a web header or as a JavaScript DOM element, and that is essentially supposed to signal "Do Not Share my information." Do Not Sell / Do Not Share - I won't get into that whole topic. So, from a California perspective, you know, that is a signal that says, "Hey..." for the company... "don't share my data." But, the question is, if it's coming as a default from the browser, is that an "affirmative act" on the part of the individual to opt out? And, what the California AG said with the Sephora decision is, if they have installed a browser like Brave, or Abine, or a plugin that sends the signal, then they have taken an affirmative step to send that. Well, what if it becomes the default in Chrome, you know, to send this and everybody is using it? Have they taken an affirmative step, or now is it, you know...
Debra Farber 43:29
...just a virtue of being a Chrome user?
R. Jason Cronk 43:31
Yeah, but now - and this was part of the problem with Do Not Track - it was the default in Microsoft Internet Explorer, and companies were saying, "Well, that's not an affirmative act on the individual's part, so they aren't actually opting out. It's just the default of the browser." And, in the U.S., the law isn't "privacy by default." It's, you know, the user has to take an affirmative act to opt out. So, there's a fascinating thing there. Then the question is, if you do accept it, are you only doing it for, like, web-based information? Or, if you have information, like profile information, and you're sharing that with other companies, like through a data broker on the back end, does that signal opt them out, so they don't have to go through your, you know, your web form to opt out? So, Do Not Sell is a California creation. So, if it's in the GDPR world, what does this signal mean? Is it a denial of consent for doing certain things, or is it an opt-out of processing? And, is it an opt-out of every processing, or is it only an opt-out of processing for the sharing of data, but you can do other things with it? Again....
Debra Farber 44:44
But, only from that web instance because what if you then did something different via your mobile on the same page. Right? Is Global Privacy Control even global? No. Not across all of the surface area.
R. Jason Cronk 44:56
It's very tricky and, you know, kind of showcases the interplay between technology and legal, and it's very hard for any one side. Now, of course, from my perspective, I like to go back to the least common denominator. If you're building in privacy, this should be an indication... and you should be doing privacy-by-default. Forget the law; you should be doing privacy-by-default. So, even if you're not getting a signal, you shouldn't be sharing data. I mean, nobody's gonna love me for this statement, but you shouldn't be sharing data unless people opt in, or you have, you know, some legitimate interest in doing so, or, you know, whatever. And, that's a whole 'nother can of worms that we won't go into.
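For readers who want to see the signal Jason describes: the Global Privacy Control proposal expresses the preference as a "Sec-GPC: 1" request header and a navigator.globalPrivacyControl boolean in the browser. A minimal sketch of checking both follows; the function names and the policy hook are assumptions for illustration, and none of this resolves the legal questions discussed above.

```typescript
// Client side (browser context): the GPC proposal exposes a boolean on Navigator.
// Browsers that don't implement it leave the property undefined, so default to false.
function clientSignalsOptOut(): boolean {
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

// Server side: the same preference arrives as the Sec-GPC request header,
// where the value "1" asserts a do-not-sell/share preference.
function requestSignalsOptOut(headers: Record<string, string | undefined>): boolean {
  return headers["sec-gpc"] === "1";
}

// Hypothetical policy hook: suppress third-party sharing when the signal is present.
function mayShareWithThirdParties(headers: Record<string, string | undefined>): boolean {
  return !requestSignalsOptOut(headers);
}
```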
Debra Farber 45:36
Well, I am glad I asked you about GPC because I honestly...I learned a lot; and, I thought those were great examples as to what...where some of the challenges are and it's not a panacea, for sure.
R. Jason Cronk 45:46
Right.
Debra Farber 45:47
Okay. Well, I have so many other questions I was going to ask you, but gosh, I mean, we just could talk for hours and hours. So, instead, I'm going to ask you just a general, you know, what trends are you seeing generally that you want to just kind of like, you know, opine about beyond what we've already discussed? And then, any advice to the privacy technologists out there that you just want to plug.
R. Jason Cronk 46:08
So general trends, which I've been pounding my fist on for a couple of years: deceptive designs. Right? So-called "dark patterns" or manipulative design, and not just about... so the kind of focus from a U.S. law perspective is deceptive designs in collecting consent. But, it's any kind of deceptive design where you're trying to manipulate a user. And, this is about individual autonomy and manipulating people into making decisions that are not necessarily good for them, but are good for the business.
Debra Farber 46:39
Or decisions they wouldn't have necessarily made if not for manipulation.
R. Jason Cronk 46:43
Exactly. And, this is a clear privacy issue. Again, a lot of people have this kind of misconception that privacy is about confidentiality of data, and they think about that. And this is...
Debra Farber 46:54
Decisional interference.
R. Jason Cronk 46:55
Yes, it's different. And so, what I really want to say to your audience is, "Look. You need to look at these other sorts of things." In the Solove Taxonomy, you have things like surveillance and interrogation that seem, on their face, about information, 'cause you're gathering information by surveillance or by interrogating people. But the example I use - you know, we talked about the surveillance in your room and a camera - what if I put a camera in your room, but it failed to work, and you didn't know it? I mean, it's still a violation of privacy - my attempt to surveil you - or if it was just, you know, a peephole, right? It's still a violation of somebody's privacy even though information was not collected. I talk about interrogation. The example I always use is a job interview: somebody interviewing a candidate and asking if they're pregnant. Right? They may not answer. It's not about the information and what I do with the information; just asking the question is problematic. Asking the question is an invasion of their privacy...
Debra Farber 48:00
...and might change their behavior.
R. Jason Cronk 48:02
...and it's going to...it's going to create tangible side effects, but it also is this kind of moral harm that they shouldn't be doing in the first place. So, there's all sorts of privacy violations. Don't just think about, you know, confidentiality of data or personal data.
Debra Farber 48:17
I think that's a good one, yeah.
R. Jason Cronk 48:18
Oh, this was interesting - going back to The NIST Privacy Framework. One of the things I have to disabuse people of when I'm teaching about it... I won't ask you the question, but I do ask it of anybody who's kind of involved: what's important from The NIST Privacy Framework? Is it personal data? Personally Identifiable Information? Personal Information? And, no matter what they say, they get it wrong. The NIST Privacy Framework talks about "data" - the privacy harms resulting from data processing. It doesn't have to be personal data or personal information. It doesn't have to be a unique person. I'll give you an example; this is what I've been using recently. The New York Times published an article about Hasidic Jewish schools in New York. They were collecting taxpayer funding, and the schools did some standardized testing on the kids - it's like elementary school - and they 100% failed. Like, they didn't have basic English and math skills. So, The New York Times published this. Great. It's all aggregated data. It's not saying that, you know, this one person is revealed. But now you're walking down the street, you run into somebody, and you're talking, having a conversation with them: "Oh, yeah, I went to this Hasidic Jewish school," and now you have a judgment about them, because now you know that they're in the set of people. So this is what's called, in the academic literature, "sensitive attribute revelation." So, even though I don't know the individual at the time when it was released, they released a sensitive attribute - that everybody failed - and so if I run into somebody that is in that set, now I can attribute this sensitive category, that they failed basic math and English skills, to them. And so again, it's not about "personal data," quote unquote, or "personally identifiable information," but it's still a privacy harm.
Debra Farber 50:09
So, I guess I'm gonna sum it up by going it's not just about...obviously, it's not just about compliance because, I mean, my whole show is shifting privacy left. Right? But, it also seems to be that you could get a lot of ethics into your organization if you just start threat modeling, just kind of adopting a threat modeling mindset. What could go wrong if I did this? Not just the happy path of what are we trying to build to, but what could go wrong if we build this?
R. Jason Cronk 50:37
So, one of the analogies I've been using...forgive me, I don't think we've said this in the last hour, but it's a pollution analogy.
Debra Farber 50:44
Yeah, go for it.
R. Jason Cronk 50:45
So back in the 1950s, '60s, and '70s, you know, pollution was a problem. Companies were dumping pollutants into the streams and ponds behind their manufacturing floor, and it was an externality. So, they were imposing a cost on society and individuals outside the company and they were not internalizing it, but they were getting the profits in return from the chemical processes that they were doing. And, then along came government and said, "Hey, no. If you're going to use arsenic, you have all this paperwork to fill out, and you have to handle it in a certain way, and you have to spend money doing compliance and documentation and prove...
Debra Farber 51:20
Insurance.
R. Jason Cronk 51:21
Yeah, insurance, and prove that you did all of the things; and, you know, companies were naturally upset and like, "What is this? This is a compliance nightmare!" Okay, maybe you need to rebuild your processes. And, I think companies are still kind of struggling and maybe coming along. Maybe you need to rebuild your processes so you don't have this industrial pollutant that you're now tasked with being compliant about. So, from a privacy perspective, instead of trying to paper over everything, and doing PIAs and DPIAs and data transfer assessments, and, you know, all this kind of stuff, and complaining about how much work and paperwork it is, rethink your business model and think about how you can achieve your goals without having all of this output that is causing potential problems and causing you problems from a paperwork perspective.
Debra Farber 52:14
I mean, I shout about that all the time. I 100% agree. It's like, stop the compliance paper chase, which is expensive! If you do data minimization or re-architecting, there are so many privacy enhancing technologies and new architectures that can allow you to avoid having to do some of this.
R. Jason Cronk 52:32
So, I have a very flippant statement. A lot of people will say you can't have privacy without security. Right? Data minimization: if I don't have data, I don't need security.
Debra Farber 52:42
Yeah, at Visa. When I was there, I was working in Public Policy...
R. Jason Cronk 52:47
With VG?
Debra Farber 52:49
With EG?
R. Jason Cronk 52:49
VG. Vaibhav.
Debra Farber 52:52
Oh, yeah. Vaibhav. Yeah, back at that time, he was on the Security Awareness team, and I was on the Public Policy team, and one of the big ways that the Visa Security team would talk about their security posture was how they were really focusing on "data devaluation." So, there was a huge Cybercrime department - I mean, not as big as Microsoft's, but, like, you know, there's a lot of data that Visa can see in terms of cyber criminals and cybercrime, and they try to prevent theft from banks and stuff like that. And, I honestly found it fascinating; and with the EMV chip and some of the other innovations, you know, you tokenize data, so you're eliminating any ability to have fraud in the transaction when the card is present. All the fraud started to move online with the EMV chip because - well, I won't get into why - but the fact that the EMV chip used a token, and it was tokenized data, we would refer to as our "data devaluation strategy," because if you make the data worthless to the criminals, then they'll go somewhere else. They go where the money is. And so, devaluing your data by tokenizing it, by anonymizing it (which is very hard to do), pseudonymizing it, you know, all these different... differential privacy. You know, you've got to pick the right tool from the toolbox for the right use case, but I think that more privacy folks should kind of think in those terms. You know, how do we make this data so that nobody's going to, you know, want to steal it, copy it, or use it in any way other than that for which it was intended?
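A toy illustration of the data devaluation idea discussed above: replace the sensitive value with a random token and keep the mapping in a separate vault, so data stolen from downstream systems is worthless on its own. This is a minimal sketch, not how Visa's tokenization or EMV actually works; the class and names are invented.

```typescript
import { randomUUID } from "node:crypto";

// Toy token vault: the token that circulates through downstream systems is a
// random value with no mathematical relationship to the card number, which
// lives only inside the (tightly controlled) vault.
class TokenVault {
  private vault = new Map<string, string>(); // token -> original value

  tokenize(pan: string): string {
    const token = randomUUID();
    this.vault.set(token, pan);
    return token;
  }

  // Only the vault can reverse the mapping; a breach of any downstream system
  // yields tokens that are worthless on their own.
  detokenize(token: string): string | undefined {
    return this.vault.get(token);
  }
}

const vault = new TokenVault();
const token = vault.tokenize("4111111111111111"); // a well-known test card number
console.log(token); // downstream systems store and transmit only this value
```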
R. Jason Cronk 54:27
Yeah, and, you know, at least from my perspective, for 20 years with data breach legislation, there's been this over-focus on, you know, data security - and privacy as the security of personal data - and, as we just discussed, there are all sorts of privacy issues that are not about personal data. There are all sorts of things you can do... so Solove's Taxonomy, which I use - now he has 16 harms. I've actually narrowed it to 15. I won't go into that whole discussion of why I think one of them belongs somewhere else. But one of the things I point out, especially when talking to security professionals, is only one of those 15 harms is about security; that's where the security bucket falls. But, there are 14 other harms that are not about the security of data but, you know, about the use of data, or about the sharing of data, or about these other things that don't involve data at all, or only ancillarily involve data. And so, we've got to start thinking about this much more broadly than just the security of data.
Debra Farber 55:27
Well, Jason, thank you.
R. Jason Cronk 55:29
We can't go for another hour?
Debra Farber 55:30
No, not this time, but I am certain I'll be, you know, speaking with you again on this show, maybe when there's been some more crazy changes in the privacy by design space, or maybe to bring you back to kind of talk about some snafus in recent media, but you know, thank you so much for joining us today on Shifting Privacy Left to discuss what's new and exciting in privacy by design and default.
R. Jason Cronk 55:54
Thank you very much for having me. I believe we have a cocktail reception to head to after this.
Debra Farber 55:58
Yeah, you're also keeping me from that.
R. Jason Cronk 56:01
Now I know your motivation here of getting this over with.
Debra Farber 56:06
Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you've found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to Privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.