This week's guest is Kim Wuyts, Senior Postdoctoral Researcher at the DistriNet Research Group at the Department of Computer Science at KU Leuven. Kim is one of the leading minds behind the development and extension of LINDDUN, a privacy threat modeling framework that supports the elicitation and mitigation of privacy threats in software systems.
In this conversation, we discuss threat modeling based on the Threat Modeling Manifesto Kim co-authored; the benefits of using the LINDDUN privacy threat modeling framework; and how to bridge the gap between privacy-enhancing technologies (PETs) in academia and the commercial world.
Copyright © 2022 - 2023 Principled LLC. All rights reserved.
Debra Farber 0:00
Hello, I am Debra J. Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models and ecosystems.
Debra Farber 0:27
Today, I'm delighted to welcome my next guest, Kim Wuyts, Senior Postdoctoral Researcher at the IMEC-DistriNet Research Group at the Department of Computer Science at KU Leuven. She has over 15 years of experience in privacy and security engineering and is one of the driving forces behind the development and extension of LINDDUN, a privacy threat modeling framework that provides systematic support to elicit and mitigate privacy threats in software systems.
Debra Farber 1:00
A very warm welcome to you, Kim.
Kim Wuyts 1:02
Thank you, I'm really excited to be here.
Debra Farber 1:04
I'm really excited for you to be here, too. I've been wanting to have you on the show for a long time, since you're really right in the center of all things threat modeling for privacy engineering. So, I've been watching you for a long time, and we recently met up at the RSA Security Conference and had lunch. And so, you know, this is a long time coming. And I can't wait to ask you some meaty questions that I think our listeners are going to be really interested in. To kick it off, would you mind just giving us an overview of your career journey and how you got interested in threat modeling in the first place?
Kim Wuyts 1:40
Well, I started as a junior researcher doing some research projects, and one of the assignments was to do a security analysis. So, we looked around and there's this really cool approach called STRIDE, which was created more than 20 years ago at Microsoft as part of their security development lifecycle, and that's a security threat modeling approach. And, we were applying that, and our privacy colleagues were actually saying, "Wow, that's really interesting, because we don't have something similar for privacy. We also would like to have something systematic and something that has some additional information and additional knowledge." So, we kind of joined forces and created a threat modeling approach for privacy; and I've been working in that area and on our LINDDUN privacy threat modeling approach ever since, basically. And actually, in a couple of weeks, I will switch over from academia and, after some well-deserved rest, move into an industry position. So, another exciting step in the career. Yes.
Debra Farber 2:43
That's really amazing. That's exciting. I might be asking you a little bit about that later. I'm not sure how much you will be able to reveal. Can you tell us who you're going to be moving to work with?
Kim Wuyts 2:52
Well, I'm still checking my options. I'm trying to take some time and basically also just enjoy some relaxed time, some time off after my academic career and diving into some options there. So well, if anybody is looking for a privacy threat modeling expert, feel free to reach out, I would say.
Debra Farber 3:12
Well, if anybody is looking for a privacy threat modeling expert, it seems clear to me that you would be like at the top of the list to interview for sure. So that's really exciting. I'm glad you put that call out there. And I'm really also glad to hear you're going to take some time off because you've definitely been working on this stuff for a long time. And, you know, I think taking downtime is essential, so good for you. And it's the summer, so that's lovely.
Kim Wuyts 3:38
Yes, yes, absolutely.
Debra Farber 3:40
So, I know you co-authored, a while ago, the Threat Modeling Manifesto with several other threat modeling researchers. And so, I'd love to discuss: what is threat modeling, generally? When we say this term, what do we mean by "threat modeling?" Who should be threat modeling? And then, what's your philosophy on threat modeling?
Kim Wuyts 4:01
So the manifesto definition - and it took us like weeks and weeks of discussions and tweaks to get 15 people to agree on a definition - is analyzing representations of a system to highlight concerns about security and privacy characteristics. In other words, basically, threat modeling means you start thinking about security and privacy early on and start thinking about all the things that can go wrong in your system so you can fix it before it actually happens. So, it's really embracing that "by design" paradigm for security and privacy really bringing it in early in the development lifecycle.
Debra Farber 4:40
Love it. And then, since it's a threat modeling manifesto, is there a particular philosophy on threat modeling?
Kim Wuyts 4:47
Well, so threat modeling has...you have different approaches, you have different techniques you can apply, but the essence really boils down to four key questions: What are we working on? What can go wrong? What are we going to do about it? And did we do a good job? It's basically a very simple idea that, well, we apply in our everyday lives. When you leave the house, you think, "Well, I don't want a burglar to come in." So, I think about all the things that can go wrong. Well, they can enter through the door or through the window. So, let's make sure I lock everything right up. So, we ask ourselves these questions unintentionally, often, in our day-to-day activities, and the idea of threat modeling and of this Threat Modeling Manifesto is to make people aware that this is really a fantastic way to do so for security and privacy, too - to really embed that in the design, have that reflection on thinking about what can go wrong, because that can help you throughout the development lifecycle; that can introduce new requirements that will increase your security and privacy posture.
Debra Farber 5:48
And I love it, of course, because it definitely meets that "shift left" trend and need to address privacy and security earlier in the data lifecycle and engineering lifecycle. So how do you see those things working together - threat modeling and "shifting left?"
Kim Wuyts 6:05
Well, I've been discussing this with some of my Threat Modeling Manifesto colleagues as well. I would say that threat modeling can be considered kind of a vehicle for "by design" because threat modeling gives you that technique, that approach, to really do this in a systematic and structured way. And the outcome, then again, guides you in the subsequent steps of the development lifecycle. It gives you requirements to be implemented in the architecture; it gives you input for pentesters to use, because you define what can go wrong and what you want to avoid. So, pentesters can use that as input to see whether you actually fixed it. So, I think it's a great approach that will help you, that will guide you through that "by design" paradigm.
Debra Farber 6:49
So, would you say that the people who would be responsible for this framework within an organization would be developers themselves? Would it be somebody who's more on enterprise risk? You say, pen testers, so that to me screams, typically security. So, who within an organization would be the one that you would...or who are the people - maybe it's groups of people - that should own the process of privacy threat modeling?
Kim Wuyts 7:15
Well, developers are definitely a great target audience. There are architects, analysts. I'm gonna quote one of the Manifesto patterns, which says you need a varied viewpoint, basically. You should try to get different perspectives around the table. You need to have the developer, the project lead, the privacy expert, the security champion because...well, first of all, you need to have a good understanding of the system. And typically, those people all have kind of different perspectives on that, and getting them together around a table will give you that joint view on the system. And that typically already brings up some interesting discussions like, "I didn't know it was done like that." "Well, we said so in the documentation." "Yeah, but we didn't implement it like that." So, getting that conversation started is really important there. But of course, you need a facilitator, which is typically either the privacy expert or the threat modeling expert, who will guide the discussion, who will try to get that system model out of that group of people and trigger conversation...trigger brainstorming about what can actually go wrong from a privacy or from a security perspective.
Debra Farber 8:32
That makes sense. Thank you so much for that. Now, staying with the Threat Modeling Manifesto before we dive deeper into the LINDDUN privacy threat modeling framework, I just wanted to ask you about basic patterns that benefit threat modeling versus ones that inhibit it - like best practices versus, you know, things to avoid.
Kim Wuyts 8:53
Yeah, well, I can list a couple because we actually have a list of patterns and anti-patterns in the Manifesto. The Manifesto is just, like, a couple of minutes' read. I really highly recommend it. I know I am biased because I'm one of the authors, but I think it has so much great content there. So, one of the patterns that we described - the things you should try to look for when you do threat modeling - is to have a systematic approach, because if you apply threat modeling in a systematic way, it becomes reproducible. And I think for privacy by design, that's great, because that gives you that kind of accountability that some data protection legislation likes to have, like proving that you actually did this by design because you used a systematic approach. I think that's really valuable for privacy.
Kim Wuyts 9:39
I mentioned the varied viewpoints: get people with different perspectives around the table. Then, another pattern is to use successfully field-tested techniques. There are several approaches to privacy threat modeling out there, to security threat modeling out there. But, the thing is, there is no one-size-fits-all. So, try what works for you specifically, what works for your type of development practice, for your kind of organization, for your specific context or domain. Maybe you benefit more from a quick brainstorm, or maybe you're in a highly-regulated domain and you need a very systematic, very formal, very thorough analysis. And then, you have all kinds of approaches in between there.
Kim Wuyts 10:25
And then, talking about a couple of anti-patterns. For instance, people work hard on the "what are we working on?" question - trying to come up with the best model to represent a system. But there is this quote by George Box that says, "All models are wrong, but some are useful." Basically, there is no one perfect representation. Start with what you have; extend with what you need. When you are talking about privacy or security, there is no single ideal view. And basically, the more information you have, the more additional representations you have, the better you can start looking at the problems...at the privacy problems or the security problems.
Kim Wuyts 11:07
And maybe another misconception or anti-pattern is the "hero threat modeler." So, there's this idea that, "Well, this sounds like a very difficult thing to do; I will bring in a privacy expert, and that's the only person who can ever do that because, well, they're the expert." But as I mentioned earlier: who should do threat modeling? Well, everybody. So, it's not that you need to have this crazy ability, this unique mindset. Everybody can and should do it. The goal is not to have that one privacy champion, one privacy expert, fixing everything for the entire company but, by creating awareness, by including people in those threat modeling sessions, people will get to grow in this and embrace this and have that reflection, have that awareness of thinking about privacy even when they're not in a designated threat modeling session. They can maybe do that later themselves for their own teams.
Debra Farber 12:05
That's great. I agree with all of that. Right? I mean, there's only so much that a singular expert can do. A lot of what we need to do as experts is transfer our knowledge to others to enable them to take, you know, action as part of their own job roles - like, in this case, you know, software developers. So, huge fan of that, and I totally agree with that approach.
Debra Farber 12:25
I want to turn our attention to the LINDDUN framework, this threat modeling framework for identifying privacy threats in software systems early in the development lifecycle. Now, you've been working on that for years; you're part of the team there. Can you start out by letting us know what the benefits of using LINDDUN are?
Kim Wuyts 12:43
Yes, sure. I've been talking about threat modeling now for a while. So, threat modeling means to think early about all the things that can go wrong. But thinking about, "Okay, what can go wrong from a privacy perspective?" - that's kind of easier said than done, right? You already need to be a diehard privacy expert to know all the things that can go wrong. So, that's where I believe approaches such as LINDDUN have great value, because LINDDUN is actually an acronym, a mnemonic for the different privacy threat categories that need to be tackled. So, LINDDUN is a mnemonic for seven categories, which are: Linking, Identifying, Non-repudiation, Detecting, Data Disclosure, Unawareness, and Non-compliance. And, for each of those seven categories, LINDDUN provides additional guidance in the form of "threat trees" or "threat cards" that will help you understand what this actually means and that will guide your analysis, because you can really go through each of those threat types, threat nodes, to trigger your inspiration and to think, "Okay, does this make sense for my system? Where does this apply? If so, well, I found a threat I need to document and then resolve."
Debra Farber 14:01
I love that. And, I was actually going to ask you where LINDDUN the term comes from, and I can't even believe I didn't see that. I have the privacy threat types, all seven of them listed out, and it's an acronym of those privacy threat types and I just didn't even realize that before you called it out. So, thanks for that. But do you mind going through each of them and just giving a high-level description of what these privacy threat types are? What does it mean? First one being 'linking' for instance?
Kim Wuyts 14:29
Yes. So, 'linking' means you can link different bits and pieces of information together. And so, it's about personal information. And obviously, the more of those data items you link together, the more information you can deduce because, well, you start building a profile about individuals. And often, this kind of linking leads to the next category, which is 'identifying' - actually deducing who that person is. You don't need to know somebody's social security number or DNA sequence or full name, address, and date of birth to know who that person is. If you would tell somebody in the privacy community that you have been talking to that blonde woman who talks about LINDDUN, those are three different data items, properties, that on their own don't reveal much. I mean, 'blonde' and 'woman' don't say much, but by combining those bits and pieces together, you quickly grow a more specific profile; you quickly zoom in to a smaller anonymity set. And the more you add, the more narrowly you can single out a person. And singling out can lead to identifying the natural person, and that's often something you want to avoid from a privacy perspective.
Kim Wuyts 15:47
Then you have 'non-repudiation,' which - especially for the security people - is kind of weird, because for security, non-repudiation is a property you want to achieve. From a privacy perspective, you often want to have plausible deniability. Like, you want to be able to deny who you voted for, or deny that you accessed a certain type of system, or whatever. So, that's something that also ties back to hiding a claim - something that you did - and hiding the attribution link from your identity to that claim.
Kim Wuyts 16:22
Then we have 'detecting.' Detecting is kind of related to side-channel attacks, too. Detecting means that even if you don't have access to the data, to the system specifically, to information in a flow, you can still deduce some interesting information. An example I like here: when you want to know whether your friend has an account on some sketchy website, you often don't need to hack into the database of user accounts; you can just go to the 'forgot password' feature and type in their email address, and when it replies, "Well, okay, thank you, we found your email address in the system; you will now receive an email to reset your password," you can just detect that that email address is part of the system without actually having access to the database of user accounts.
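[Editor's note] Kim's 'forgot password' example can be sketched in a few lines. This is an editorial illustration, not code from LINDDUN; the function names, messages, and addresses are all hypothetical. The usual mitigation is to return an identical response whether or not the account exists:

```python
# Hypothetical sketch of the 'detecting' threat Kim describes:
# a password-reset endpoint that reveals whether an account exists.

REGISTERED = {"alice@example.com", "bob@example.com"}

def reset_leaky(email: str) -> str:
    # Anti-pattern: the response differs depending on account existence,
    # so anyone can detect that an address is in the system.
    if email in REGISTERED:
        return "We found your email; a reset link is on its way."
    return "No account exists for that address."

def reset_uniform(email: str) -> str:
    # Mitigation: queue the reset mail (or not) as a side effect, but
    # always return the same message, so existence cannot be detected.
    if email in REGISTERED:
        pass  # queue the reset email here
    return "If that address is registered, a reset link has been sent."

print(reset_leaky("alice@example.com") == reset_leaky("eve@example.com"))      # False: detectable
print(reset_uniform("alice@example.com") == reset_uniform("eve@example.com"))  # True: not detectable
```

The same uniform-response idea applies to login errors and signup forms; any observable difference in behavior is the detection channel.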
Kim Wuyts 17:16
Then we have 'data disclosure,' which ties back to the minimization principle, which I think is one of the core principles for privacy. So, don't collect more, don't store more, don't process more than you actually need. Don't collect more specific information - like, don't go too fine-grained if you don't need it. Don't collect it, or don't store it for too long a time. Those are all data disclosure threats.
Kim Wuyts 17:44
Then we have 'unawareness' and 'unintervenability,' which basically relate to transparency and control. So, as a data subject - an individual - you have the right to know what information is being collected about you, why it's being collected, and who else has access to it. And often, that's forgotten. In the same way, as an individual, you also have the right to have access to your information, to be able to decide who gets access to it. So, that's all in this category.
Kim Wuyts 18:16
And then the final one is 'non-compliance,' and we - in the latest version - stretched this out a bit. It's not just about data protection compliance. It's really about complying with best practices of related, relevant domains, because with privacy...you can do all these things and de-identify and minimize and all those things; we still rely, for instance, on security and confidentiality. So, this final category tells you that you also need to look at security best practices. You also need to get legal compliance; you need to get a lawyer involved, because legal basis is something that you cannot check from a technical perspective. And because privacy is about data, about personal data, data lifecycle management is also something that needs to be tackled properly. So, those are the seven categories in a nutshell. But basically, you can check everything on the website; everything is freely available on the website, which is linddun.org.
Debra Farber 19:16
Yes, that's an excellent point. All these materials are free and accessible, and I encourage everybody to go take a look at them. So, I want to go down a few more layers to kind of demonstrate how software engineers can leverage this framework. And, I understand that LINDDUN has a concept of 'privacy threat trees,' which aim to refine each threat type into more concrete threat characteristics. Can you explain how privacy threat trees help the threat modeler?
Kim Wuyts 19:48
Yes. So, a threat tree basically lists, like you say, the threat characteristics - the properties that will result in a privacy threat. And, by going over all those characteristics in detail, you get a more fine-grained analysis. You get that additional information, because if you're not an expert - well, I can say, "Well, now think for 10 minutes about linking," and then probably many of you would still be like, "Okay, linking...but what do you mean?" So, this is where the threat trees - or, similarly, we also have 'threat cards' for the LINDDUN Go approach that contain basically the same knowledge, but in a different representation - will help you to facilitate that discussion and guide that analysis.
Kim Wuyts 20:36
So, for instance, for linking, it tells you that you should look specifically at linked data - data that can be linked together because the items all share a unique identifier, which can be a social security number or a username or something. But you can also link data - it becomes "linkable data" instead of "linked data" - when you can start combining things together through quasi-identifiers. So, what I mentioned before: the blonde woman talking about LINDDUN, for instance - these different data items on their own don't really say much, but when you combine those, it becomes a quasi-identifier; then it kind of ties back to me. And, you can link individuals, but you can also link groups of people - like, make profiles about all people who live in a certain neighborhood, because maybe the insurance company wants to know that they have a higher risk of a certain disease and therefore increase their insurance fee. So, linking is about tying those bits and pieces of information together and using additional information, profiling people, inferring information. And the threat trees will help you to reason about this because, in addition to explaining those different characteristics, the trees and the LINDDUN Go threat cards also contain some examples to help you see what an example of a direct linking threat looks like.
Debra Farber 22:07
Yeah, it would be great to see an example. If we unpack a little more to demonstrate how the model can assist software engineers, using the particular privacy threat type of "linking," would you be able to explain what some of the concrete threat characteristics, examples, criteria, and impact info would be?
Kim Wuyts 22:29
Yes. So, one of the characteristics in linking says you can link because you have a unique identifier. That can be, for instance - I'm now reading from the threat tree of linking - an email address that's used as an ID, or an IP address, which is also a unique identifier. It can be that you're logged into a certain website, so a specific user ID can also be a unique identifier. When we then move to another characteristic for linking - linkable data through combinations, through quasi-identifiers - you can, for instance, have a combination of age, gender, and postal code, because those three pieces of information are known to be, I think it was, like, 90% specific in revealing a unique person. Browser fingerprinting is another one. You can assume that you're browsing anonymously but, because of all the information that you share with the back end - the type of browser you use, the fonts you have installed, the font size you are looking at, the languages that are installed on your computer, on your browser - those are all, when combined, so specific that you are actually unique, and different visits to that same website can be linked to your profile because they can be tied to your specific browser fingerprint. And then we have different examples and criteria to look for across the different characteristics as well.
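[Editor's note] The age/gender/postal-code example can be made concrete with a small anonymity-set check. This is an illustrative sketch with invented records, not part of the LINDDUN materials; it just shows how combining individually harmless attributes shrinks the set of matching people down to one:

```python
from collections import Counter

# Made-up records: each attribute is harmless on its own, but the
# combination can become a quasi-identifier that singles a person out.
people = [
    {"age": 34, "gender": "F", "postal": "3000"},
    {"age": 34, "gender": "F", "postal": "3001"},
    {"age": 34, "gender": "M", "postal": "3000"},
    {"age": 51, "gender": "F", "postal": "3000"},
]

def anonymity_set_size(record, attrs):
    """How many people share this record's values for the given attributes?"""
    counts = Counter(tuple(p[a] for a in attrs) for p in people)
    return counts[tuple(record[a] for a in attrs)]

target = people[0]
print(anonymity_set_size(target, ["gender"]))                   # 3 share 'F'
print(anonymity_set_size(target, ["age", "gender"]))            # 2 share (34, 'F')
print(anonymity_set_size(target, ["age", "gender", "postal"]))  # 1 -> singled out
```

An anonymity set of size 1 means the combination uniquely identifies the person, which is exactly the "singling out" that the linking and identifying threat categories warn about.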
Debra Farber 24:01
That's super helpful. And thank you for going through that, because as I was looking through LINDDUN, I thought, "Wow, you know, I really want to make this conversation clearly demonstrate that there are technical considerations for developers" - specifically, like, looking at what data might be flowing through the organization or, you know, through SDKs and such. And so, you know, I think that was a really helpful elucidation there. And, impressively, LINDDUN is not only a framework; the team at KU Leuven also created a repeatable process for systematizing threat modeling into the system lifecycle with what they call 'LINDDUN Pro.' Can you tell us a little bit about LINDDUN Pro's methodology and how companies can leverage LINDDUN to meet privacy and data protection by design and privacy engineering goals?
Kim Wuyts 24:52
Yes, of course. Yeah. So, the idea of our now kind of three branded LINDDUN methodologies - we now have LINDDUN Pro and LINDDUN Go, and we're working on a third one, which will be LINDDUN Maestro...LINDDUN Pro is actually closely related - for the security people - to STRIDE per interaction, which means that it's a very systematic, thorough approach where you typically start from a data flow diagram, which is kind of a simple representation of how information flows through the system. So, you have only a couple of building blocks: external entities, which are users or external third parties; processes; databases; and flows in between. And what you do for LINDDUN Pro, for the thorough analysis, is you make this big mapping table where you list all those different interactions, those different elements and components, and then you map them to the LINDDUN categories, and you go over each one, one-by-one, in a very systematic way. So, that mapping table kind of becomes your guide and also becomes your accountability tool, because it will show auditors, or whoever, that you went through this very thoroughly, very systematically.
Kim Wuyts 26:05
And then, for each cell in your mapping table, you either identify threats based on the threat trees, or you write down some assumptions, saying, "Well, this doesn't apply because I make this specific assumption." On the other side of the spectrum - because, well, I mentioned before, you have different needs, different requirements from different organizations, different teams - if you don't want that very thorough approach, you can go to a more facilitated brainstorm approach with the LINDDUN Go methodology, where you are doing, as I said, more facilitated brainstorming. It can even be a gamified approach where, instead of a list of threat trees, you use a deck of threat cards that will guide you through this elicitation, this analysis phase, in kind of a similar but less thoroughly-documented way. So, the cards are the driver for your analysis, but you don't need to document this big mapping table. It goes a bit faster but, of course, then you lose a bit of that accountability because you don't write everything down. But that's useful for the people who are looking for an easier way to get started with privacy threat modeling - a quicker way, a more lean way. So, you can kind of see which of those methods, those different approaches, fits your specific needs best.
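[Editor's note] The LINDDUN Pro mapping table Kim describes - DFD elements crossed with the seven threat categories, each cell holding either identified threats or a documented assumption - might be represented along these lines. This is a hypothetical sketch; the element names and threat descriptions are invented for illustration:

```python
from itertools import product

# The seven LINDDUN threat categories (the mnemonic).
CATEGORIES = ["Linking", "Identifying", "Non-repudiation", "Detecting",
              "Data disclosure", "Unawareness & unintervenability", "Non-compliance"]

# A toy DFD: element name -> element type.
dfd_elements = {
    "patient (entity)": "external entity",
    "submit record (flow)": "data flow",
    "records DB (store)": "data store",
}

# The mapping table: one cell per (element, category). Each cell holds
# either a list of identified threats or a documented assumption saying
# why the category was ruled out -- that is the accountability trail.
table = {cell: None for cell in product(dfd_elements, CATEGORIES)}
table[("records DB (store)", "Linking")] = [
    "Records share a stable patient ID, so entries can be linked"]
table[("patient (entity)", "Detecting")] = (
    "Assumption: responses are uniform, so membership cannot be detected")

unexamined = [cell for cell, v in table.items() if v is None]
print(f"{len(unexamined)} of {len(table)} cells still to review")  # 19 of 21 cells still to review
```

The point of the structure is that no cell can be silently skipped: every (element, category) pair ends up as either a threat or an explicit, auditable assumption.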
Debra Farber 27:34
That makes a lot of sense, because I could see how that's a teaching tool - the cards are almost like a game, right? Yeah. And so, it's engaging. Would you say it's almost similar to doing, like, tabletop exercises - you know, keeping teams ready by playing this threat modeling card game? Do you think that's kind of akin?
Kim Wuyts 27:56
Yeah. Well, LINDDUN Go was inspired by the Elevation of Privilege card game, which is, again, based on STRIDE, created by Adam Shostack. And, it's a gamified version. So, that kind of triggered us, like, "Okay, we want something more lean, something more to support that brainstorming approach." So, yeah, you can definitely use it as a tabletop game, but it doesn't have to be a game. You can also just use it as a conversation or analysis facilitator without actually adding gamified angles, because we also learned from people using it that they say, "Well, this is great, but we don't like to do this as a game version. We just like to use the inputs. And, we're serious people; we don't do gamified stuff."
Debra Farber 27:56
Well, that's good to know. That's really good to know. Okay, so you don't have to use it as a game; it can be a teaching exercise.
Kim Wuyts 28:52
Oh, yeah, you can use it as a game or just as a teaching or facilitation resource.
Debra Farber 28:58
But that is still compelling, and I have a copy, and it is very useful. So, I highly recommend the LINDDUN model. How would you say that organizations can combine threat modeling approaches with privacy enhancing technologies to address privacy risks? Is there an overlap there?
Kim Wuyts 29:19
Yeah. So, with LINDDUN, we mainly focused on that second question I mentioned for threat modeling - that's "What can go wrong?" But of course, you want to look at, "How can we fix this stuff that we found?" So, we have done some work there. I'm not sure whether it's still on the latest version of the website, but we did some mapping from LINDDUN categories to privacy-enhancing technologies. But basically, I would recommend having a look at privacypatterns.eu or privacypatterns.org, or at The Little Blue Book written by Jaap-Henk Hoepman, a Dutch professor who wrote about privacy strategies and privacy tactics. And, I think those more high-level strategies, tactics, and patterns will help you look in the right direction on how to fix threats, and then you can dive into sets of threats and determine which, you know, de-identification technique is best suited for this specific threat.
Kim Wuyts 30:21
As far as I know, there is still not a very comprehensive overview of, like, all existing PETs out there. That would be really awesome. I know that there are some organizations, like the ICO, that created kind of a catalog of PETs. I really hope that we will dive into more of a community effort of collecting PETs, because there are PETs scattered all over, but it's really hard to find a good overview of what is out there. And, I think that would be really helpful for everybody who is looking into actually fixing privacy issues and looking for the right PETs.
Debra Farber 31:01
Yeah. No, I agree. Hey, if you want to co-write that book with me, I'd be glad to work on that topic, because I think there's a need. No need to answer right here on air. Okay, on to the next question. You are the Program Chair for the International Workshop on Privacy Engineering. And, I know it's taking place a week and a half from the day of this recording, so by the time most people listen to this, it will have already taken place. But, I did want to ask you a little more about the conference so people know about it for next year and, then, what topics this year you are most excited to hear about.
Kim Wuyts 31:38
Yes. So, IWPE, or the International Workshop on Privacy Engineering, is a workshop that is organized at EuroS&P, which is the European edition of the IEEE Security and Privacy academic conference. So, this is also mainly an academic workshop where we have lots of academics send in their publications and talk about them. The last couple of years, we've really tried to also get some industry people coming in to talk about, like, "How do we do privacy in our company? How do we do applied privacy engineering?" because I think that's really interesting and important - to bring that to the academic community and, at the same time, share that academic knowledge with the broader industry. Well, let me just quickly go through the program. What we have this year is a session on Privacy-Enhancing Technologies, which is always really interesting because you always learn new approaches and techniques. Well, it's the Privacy Engineering Workshop, so we have kind of an overall privacy engineering session there, with also a keynote I'm really looking forward to by R. Jason Cronk, who will talk about "Is 2023 the year of the privacy engineer?"
Debra Farber 32:51
Ooh, that's exciting.
Kim Wuyts 32:52
Yes. Right? We have a session on Privacy & Society, a session on Privacy Labels and Policies, and a session on Privacy Threats and Threat Modeling. So, I'm really excited about that session, of course, but you know, I'm biased because of my background and my interest. So, I'm really looking forward to that workshop. And well, I hope to see many of you this year or next year.
Debra Farber 33:16
That's awesome. Will there be publicly-accessible links to the talks afterwards that people can tune into?
Kim Wuyts 33:22
I'm not sure whether they will be recorded. Typically, we do share the slide decks and, well, it's an academic conference, so the papers will also be, I think, publicly available. But that kind of depends on the way the authors have registered in the system.
Debra Farber 33:39
Okay, got it. All right, well, I'll put a link to the conference page and people can, you know, see what they have access to.
Debra Farber 33:47
As we're closing up here, I just want some of your thoughts on a few things. Are there any topics on your radar within the privacy engineering world over the next six months? There are some interesting blends of tech here: everyone's talking about AI; I don't know if anyone's talking about the "metaverse" anymore; and there are all these other technology stacks, plus privacy engineering and cloud. Are there any specific areas that stand out for you, that you're excited or interested in learning more about?
Kim Wuyts 34:18
Yeah, well, I think AI is the big one that will need more attention, not just in the next six months, but way after that, too. I think Responsible AI is a really hot topic, and for good reason, because there's so much more to be done there. Within the threat modeling community, a friend of mine, Isabel Barbera, has created a threat modeling knowledge base to reason about Responsible AI, actually inspired by our LINDDUN Go card deck.
Kim Wuyts 34:49
So, even within the threat modeling space, there is some work on Responsible AI. But yeah, I think that's one of the big domains we need to dive into from a privacy perspective: ethics, trust, responsibility. Definitely a hot topic to follow. Yes.
Debra Farber 35:05
Yeah. Let's see, what books, communities, conferences, or other resources would you recommend for listeners to learn more about privacy threat modeling, privacy engineering, or PETs?
Kim Wuyts 35:19
Yes. Well, my two favorite privacy conferences are probably the ones by IAPP. I only attend the one in Brussels, but I hear great things about the other ones across the globe as well. Every year in Brussels, we also have CPDP, the Computers, Privacy and Data Protection conference, which is also a very international conference with lots of great panels, a lot of big names who come talk there, and great networking opportunities. As for communities, I mentioned IAPP before. Honestly, I get a lot of my privacy input from people I follow on LinkedIn, like Katharina Koerner, who has done some great things for the privacy engineering community at IAPP, and Tim Clements, who shares so much interesting stuff on data protection. There are so many other people I love to follow. For threat modeling specifically, there is the Threat Modeling Connect community, which is a really open community where people can ask questions, where you get support if you're stuck, and where there are lots of seminars to help you grow in threat modeling. So, I think that's also a great resource to have a look at.
Debra Farber 36:40
Excellent. And is that a Slack group or is there a website for the...
Kim Wuyts 36:44
Yeah, it's a website. I think it's just threatmodelingconnect.org or .com; I'd have to look. It's run by a bunch of threat modeling experts, and you just have to sign up to get access to lots of message boards and seminars and all those kinds of things. Actually, a couple of months ago, they even did a threat modeling hackathon, not just about security threat modeling, but also about privacy threat modeling. So, privacy has really been part of that community, too.
Debra Farber 37:14
Excellent. And, I meant to ask you this before, but what's adoption like for the LINDDUN model? Has it been slow for organizations to start incorporating it, or are you seeing a real big uptick in the use of LINDDUN across organizations?
Kim Wuyts 37:29
Yeah, so especially since GDPR entered into force, we got a lot more people reaching out: "Oh, can you help us? How can we use LINDDUN? How can we collaborate?" Ever since data protection legislation kind of forced people to start thinking about privacy, we've seen a big uptake. And it's both local and international, from startups to very big, global tech companies that I'm aware are using LINDDUN or LINDDUN Go. The feedback I got, even from a big company, is: "Well, even though we have the budget to create our own stuff, this is such a well-established technique now that it would be silly not to use all this knowledge and all these resources and just integrate them, because they're so useful. And, we can still tweak them to our own needs to make it fit." So, I'm really excited to see that. Also, there are now a couple of ISO standards, on Privacy Engineering, I think, or was it on Privacy-by-Design, that explicitly mention the need for privacy threat modeling. The ISO standard on de-identification techniques also mentions privacy threat modeling as an approach to be included. So, I really see an uptake of privacy threat modeling within the privacy engineering community, and I think LINDDUN is one of the bigger names there being applied and included.
Debra Farber 39:04
Absolutely. I mean, it's the one that I've definitely heard the most about in my research. I guess my last question for you, before we close, is: how can we better bridge the gap between the academic and commercial worlds when it comes to advancing and deploying PETs, and maybe the LINDDUN model as well? They kind of go hand-in-hand, because you might deploy the right PET based on threat modeling. I guess that's the question; I'm curious about your thoughts.
Kim Wuyts 39:32
We need to bridge that gap. Right? And, it's not that straightforward, because some academics kind of want to stay in their ivory tower, and some industry people are saying, "I don't really care about what's going on in academia. I want something that I can use right now." So, I think academics will need to look not just at foundational research, but also at 'ready to market' research, really reaching over and handing over things that are usable. I think we are working on things like that because, for instance, we have the LINDDUN Go approach that makes it more lean. We are working on improvements for the LINDDUN threat trees as well, to make them more understandable and more usable, so the industry can be open to academic resources.
Kim Wuyts 40:26
I forgot to mention another conference that I've only attended remotely, but still really liked: the PEPR conference. We try to do that for IWPE as well, but I think PEPR is one of the great examples where academics and industry people from the privacy engineering field really come together and have side-by-side talks: "Well, this is some new exciting research we have been doing." And then somebody from a big company says, "Well, this is how we actually do this in practice." I think it's wonderful to see that mix of the two worlds coming together and learning from each other, with the academics taking something back, like, "Well, okay, this isn't working in practice. So, maybe we should go back to the drawing board and start re-working our foundational research." And then industry people saying, "Well, okay, this seems like an interesting concept. Maybe we can work further with it, and maybe have the researchers join us and work on more of the foundational research; then we apply it in practice and see how that works." So, I really think we need to look for ways to bring those together: in company projects, in research projects, and definitely at conferences.
Debra Farber 41:43
Thank you so much for that. I will put a link to the PEPR conference in the show notes. And PEPR stands for Privacy Engineering Practice and Respect, for those of you who are unfamiliar. And, it is a great conference. Any last words before we close for today?
Kim Wuyts 41:58
Oh, well, just an invitation for everybody to have a look at threat modeling in general and The Threat Modeling Manifesto, and to just start including privacy and security early on, by design. Basically, the name of this podcast: let's shift left and start working on those concepts early.
Debra Farber 42:18
Excellent. All right. Well, Kim, thank you so much for joining us today on Shifting Privacy Left to discuss privacy threat modeling and privacy engineering.
Kim Wuyts 42:27
Thank you for having me.
Debra Farber 42:29
Sure. Of course, we'll probably have you back in the future to see where you've landed and what you're working on then.
Kim Wuyts 42:35
All right. I'll hold you to that.
Debra Farber 42:39
Until next Tuesday, everyone, when we'll be back with engaging content and another great guest.
Debra Farber 42:46
Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to Privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.