The Shifting Privacy Left Podcast

S2E27: "Automated Privacy Decisions: Usability vs. Lawfulness" with Simone Fischer-Hübner & Victor Morel

September 12, 2023 Debra J Farber / Simone Fischer-Hübner & Victor Morel Season 2 Episode 27

Today, I welcome Victor Morel, PhD and Simone Fischer-Hübner, PhD to discuss their recent paper, "Automating Privacy Decisions – where to draw the line?" and their proposed classification scheme. We dive into the complexity of automating privacy decisions and emphasize the importance of maintaining both compliance and usability (e.g., via user control and informed consent). Simone is a Professor of Computer Science at Karlstad University with over 30 years of privacy & security research experience. Victor is a post-doc researcher at Chalmers University's Security & Privacy Lab, focusing on privacy, data protection, and technology ethics.

Together, they share their privacy decision-making classification scheme and research across two dimensions: (1) the type of privacy decisions: privacy permissions, privacy preference settings, consent to processing, or rejection to processing; and (2) the level of decision automation: manual, semi-automated, or fully-automated. Each type of privacy decision plays a critical role in users' ability to control the disclosure and processing of their personal data. They emphasize the significance of tailored recommendations to help users make informed decisions and discuss the potential of on-the-fly privacy decisions. We wrap up with organizations' approaches to achieving usable and transparent privacy across various technologies, including web, mobile, and IoT. 


Topics Covered:

  • Why Simone & Victor focused their research on automating privacy decisions 
  • How GDPR & ePrivacy have shaped requirements for privacy automation tools
  • The 'types' of privacy decisions: privacy permissions, privacy preference settings, consent to processing, & rejection of processing
  • The 'levels of automation' for each privacy decision type: manual, semi-automated & fully-automated; and the pros / cons of automating each privacy decision type
  • Preferences & concerns regarding IoT Trigger Action Platforms
  • Why the only privacy decision that you should 'fully automate' is the rejection of processing: i.e., revoking consent or opting out
  • Best practices for achieving informed control
  • Automation challenges across web, mobile, & IoT
  • Mozilla's automated cookie banner management & why it's problematic (i.e., unlawful)

Resources Mentioned:

  • Read the paper: "Automating Privacy Decisions – where to draw the line?"
  • Visit the CyberSecIT project website


Victor Morel:

The idea would be to build these privacy profiles according to a longitudinal study, so that you will have your predefined profile - like Privacy Pragmatic or Privacy Guardian, for instance - and according to that, you will have predefined choices about your privacy decisions. So, it's always about empowering people, about supporting their decisions by providing better and informed notices, for instance; and then we will combine it with another kind of automation, with on-the-fly privacy permissions, so that people are not burdened too much with the decisions, but only when needed - when required. We're trying to design it to solve this tension, basically, between usability and lawfulness in this kind of environment.

Debra J Farber:

Welcome everyone to Shifting Privacy Left. I'm your host and resident privacy guru, Debra J Farber. Today I'm delighted to welcome my next two guests, Victor Morel and Simone Fischer-Hübner, who will be discussing their recent paper, "Automating Privacy Decisions – where to draw the line?", in which they outline the main challenges raised by the automation of privacy decisions and provide a classification scheme that addresses those challenges. Simone Fischer-Hübner has been a full Professor at Karlstad University since June 2000, where she is the head of the Privacy and Security (PriSec) Research Group. She received her undergraduate degree, PhD, and Habilitation degree in Computer Science from Hamburg University. Impressively, Chalmers University of Technology awarded Simone an honorary doctorate two years ago, and she is now a visiting professor there.

Debra J Farber:

Simone has been conducting research in privacy, cybersecurity, and privacy-enhancing technologies for more than 30 years. I can't list all of her many accomplishments and projects right now, but I will highlight a few recent ones. She's the Swedish representative of the International Federation for Information Processing, a Board Member of the Swedish Data Protection Forum, and a Member of the Board for the Privacy Enhancing Technologies Symposium (otherwise known as 'PETS'). Victor Morel holds a PhD in Computer Science from INSA de Lyon, completed at Inria on the Privatics research team. His research interests include privacy and data protection, network security, usability and human-computer interaction, applied cryptography, and technology ethics. Victor is currently working in the Security and Privacy Lab at Chalmers University of Technology on usable privacy for IoT applications; and, in addition to his academic activities, he also volunteers his time to educate others by advocating for decentralization, privacy, and free software. Welcome, Victor and Simone.

Victor Morel:

Thanks, Debra, for inviting us.

Debra J Farber:

Delighted to have you here. So what motivated you both to focus your research on the automation of privacy decisions?

Simone Fischer-Hübner:

So, our ultimate goal is to design privacy decisions that are usable, where users are well informed and make decisions that also match their preferences. However, in practice, this is always a challenge because users are overwhelmed with lots of privacy decisions, consent requests, and cookies. They barely have time to read through privacy policies and make well-informed decisions. So, the question is whether privacy decisions can be supported through automation - for instance, whether systems supported by machine learning can predict users' privacy decisions and make recommendations or help users to set these decisions. However, there are many legal questions, as we will probably discuss later, and further challenges. So, maybe, Victor, you want to complement?

Victor Morel:

Yeah, it all boils down to this cybersecurity research project that we're conducting together with other people in Sweden. We wanted to build a privacy system that would manage these privacy permissions, but we were not sure about how we could automate these privacy decisions. So, we started to dig a little bit and ask some questions to lawyers - people working in data protection agencies - and eventually we realized that no one really knew. So, all of our findings could actually result in a research paper. This is also why we have this article now, because we had to provide the answers ourselves, in a way.

Debra J Farber:

That makes sense. So, it's the initial research for the next step: building something that you wanted to build in a privacy-preserving way, for this privacy assistant capability. It also brings up that there is a regulatory perspective here. What did you learn about your requirements for building this out when you were speaking to the attorneys? If you could summarize the relevant European legal rules, like GDPR and ePrivacy, that are relevant for privacy decision-making and automation, I think that would be really helpful.

Victor Morel:

Yeah, so the GDPR specifies a few requirements, notably for consent, because consent is one of the types of privacy decisions that we deal with in our paper. It states that consent has to be informed, specific, freely-given, and unambiguous, and also that it entails a clear statement and affirmative action. It basically means that you can't just fully automate consent. This is one important requirement. It also says that you have to provide the possibility to withdraw consent, and it should be as easy to withdraw as it is to give. It also specifies a few things when it comes to 'explicit consent.' For instance, you have to ask for explicit consent - a stronger version of it - when you deal with sensitive data (data about religious or philosophical beliefs, et cetera), when you want to automate a decision for profiling, for instance, and also when you have to transfer data to countries without adequate safeguards (which still nowadays include the U.S., because of the Schrems II court case that invalidated privacy transfer agreements between the EU and the U.S.).
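To make the consent requirements concrete, here is a minimal TypeScript sketch - all names and fields are invented for illustration, not taken from the paper or the GDPR text - of how a consent-management component might record and check the properties Victor lists.

```typescript
// Illustrative sketch: the GDPR consent properties modeled as a record
// that a consent-management component could validate.

interface ConsentRecord {
  purpose: string;            // consent must be specific to a purpose
  informed: boolean;          // the user saw an adequate notice
  freelyGiven: boolean;       // no bundling or coercion
  affirmativeAction: boolean; // a clear opt-in act, e.g. no pre-ticked boxes
  explicit: boolean;          // stronger form, e.g. for sensitive data
  givenAt: Date;
  withdrawnAt?: Date;         // withdrawal must be as easy as giving
}

// Consent is only usable as a legal basis if every condition holds
// and it has not been withdrawn.
function isValidConsent(c: ConsentRecord): boolean {
  return c.informed && c.freelyGiven && c.affirmativeAction && !c.withdrawnAt;
}

// Sensitive data, profiling decisions, or transfers without adequate
// safeguards require the stronger, explicit form of consent.
function isValidExplicitConsent(c: ConsentRecord): boolean {
  return isValidConsent(c) && c.explicit;
}
```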

Simone Fischer-Hübner:

So, it's needed for transfers to so-called 'third countries' - that is, countries outside the EU. The GDPR actually includes further requirements that are important in regard to automating privacy decisions; for instance, the principle of privacy by design and by default. By default, the most privacy-friendly settings should be installed, which also has an impact on how far we can automate privacy decisions. Then there are also further rights to object - for instance, to opt out, and the right to object to direct marketing and profiling. In addition to the GDPR, we also have the ePrivacy Directive, which is now under discussion to be replaced by an ePrivacy Regulation. The Directive governs the electronic communications sector; it is more specific than the GDPR, and in particular it also regulates cookies and tracking technologies, and here it also requires consent by the data subjects. This has led to all the cookie banners and consent requests that we are confronted with daily. So here, the ePrivacy Directive, or the future Regulation, also plays an important role.

Debra J Farber:

Thank you for that. I think both of you really helped give a summary of, I guess we'd call them, the regulatory requirements and constraints as you're building out this classification scheme. From your paper, your approach to this research seems to run along two dimensions for the classification scheme. The first is the type of privacy decision that you categorize. The second is the level of automation: whether it's manual, semi-automated, or fully-automated. Since your focus here seems to be a user's ability to control disclosure of their personal data and the conditions for the processing of that data, can you elaborate on this approach and your research?

Victor Morel:

Yeah, our focus is not only on the user's ability to control their personal data; it's also about usability, because we want to provide meaningful control. There might be a tension between usability and lawfulness. We're trying to clarify the terms of the debate here, which turned out to be not so easy. We first started to look at consent - this is also why we have such a focus on it in our paper - and then realized that not everything is just about consent. There are privacy permissions and privacy preferences; and when you withdraw consent or opt out of data collection, in a way, it's not exactly consent. It doesn't call for the same legal requirements. It's about control, and it's about usability, in a way. Simone, maybe you want to add something.

Simone Fischer-Hübner:

Yes, we elaborated on decisions which are related to consent and to controlling the disclosure and processing of personal data. Here, of course, privacy permissions are also important decisions. We are also confronted with setting privacy permissions when we are installing apps, for instance on mobile phones. These could be combined with consent, but there can also be other legal grounds for privacy permissions - for instance, the legal basis of a contract for an e-banking app. There might be an overlap with consent, but they might also not be based on consent.

Simone Fischer-Hübner:

Privacy permission settings are one type of privacy decision. Then there are also privacy preference settings, which are different from privacy permission settings: we defined a privacy preference rather as an indication of privacy choices - it only indicates the preferences of a user, without any privacy permissions being set. Privacy permissions are more like access control settings, while privacy preferences are just indications of what the user wishes. There are also different tools for indicating privacy preferences that have been developed in the past and currently, so-called 'privacy preference languages.' For instance, in the past there was P3P, the Platform for Privacy Preferences, where users could also indicate their preferences.

Simone Fischer-Hübner:

Then you have a website that states its privacy policy, so both the user's privacy preferences and the website's policy are stated in machine-readable format, and they can be matched so that you can automatically detect how far the user's preferences match the website's policy. Any deviations can then be noticeably displayed to the end user to contribute to transparency, so that the user doesn't have to read the whole privacy policy statement but just gets the deviations of his or her preferences from the site's policy displayed very noticeably. So, these are privacy preference settings - another type of decision in addition to consent and privacy permission settings. And then, as Victor said, there are decisions to reject consent, revoke consent, or object. These are yet another type of privacy decision that we can make.
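To make the matching idea concrete, here is a minimal TypeScript sketch of P3P-style preference/policy matching; the types and purpose names are invented for illustration, and real preference languages are far richer.

```typescript
// Illustrative sketch: both the user's preferences and the site's policy
// are machine-readable, so deviations can be computed automatically and
// only the mismatches need to be displayed prominently to the user.

type Purpose = "analytics" | "advertising" | "personalization";

interface PolicyStatement {
  dataCategory: string; // e.g. "location", "email"
  purpose: Purpose;
}

interface UserPreferences {
  // purposes the user is willing to allow, per data category
  allowed: Map<string, Set<Purpose>>;
}

// Return the parts of the site's policy that deviate from the user's
// declared preferences.
function findDeviations(
  prefs: UserPreferences,
  policy: PolicyStatement[],
): PolicyStatement[] {
  return policy.filter((statement) => {
    const allowedPurposes = prefs.allowed.get(statement.dataCategory);
    return !allowedPurposes || !allowedPurposes.has(statement.purpose);
  });
}
```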

Debra J Farber:

Yeah, and so that's really helpful, and thank you for giving us some illustrative examples. Just to sum it up, I think (for the audience here) that there are four different types of privacy decisions that your paper talks about: privacy permissions, privacy preference settings, consent to processing, and rejection of processing. Is that correct? Yes. Awesome. Did you have anything to add to that, Victor?

Victor Morel:

Yeah, I just had a few examples in mind. For instance, mobile app permissions are a good example of what privacy permissions can be. This is something we're confronted with every day, basically. Typically on Android, you can be asked to regulate which data will be accessed by which app, and this is a typical example of a privacy permission, which, unlike a privacy preference, has to be enforced. It's a binding decision. A privacy preference, by contrast, is just something that you indicate but that might not necessarily be taken into account by your system. One now-old example is DNT, which stands for Do Not Track - I think you mentioned it in previous episodes. People were able to state a preference about whether they wanted to be tracked or not, but websites could also choose not to take that into account. This is the main difference between privacy permissions and privacy preferences: one is binding and the other one is not.
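A minimal TypeScript sketch of that binding/non-binding distinction - all names here are invented for illustration, though DNT itself was a real request header, sent as DNT: 1.

```typescript
// Illustrative sketch: a permission is enforced by the platform - access
// fails without it - while a preference is merely a signal the recipient
// may ignore.

class PermissionManager {
  private granted = new Set<string>();

  grant(permission: string): void {
    this.granted.add(permission);
  }

  // Binding: the platform refuses access if the permission is missing.
  readData(permission: string, read: () => string): string {
    if (!this.granted.has(permission)) {
      throw new Error(`Permission "${permission}" not granted`);
    }
    return read();
  }
}

// Non-binding: a preference travels as a signal, like the old DNT header,
// and the receiving website decides for itself whether to honor it.
function buildRequestHeaders(doNotTrack: boolean): Record<string, string> {
  return doNotTrack ? { DNT: "1" } : {};
}
```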

Victor Morel:

And then we also dig into consent.

Victor Morel:

Cookie banners are a good example, but we're also interested in IoT settings, because I think it's going to be a big thing at some point that you will have to deal with the disclosure of your data in a lawful way, which is the main thing about consent.

Victor Morel:

It has a direct legal implication. There is also what we call, under this umbrella term, 'reject' - so, opting out. Interestingly, for instance, GPC (Global Privacy Control), which is enforced in the U.S. by the CCPA in California, is a good example of a 'reject' decision because it's an opt-out. So, we can't really say that it's about consent. Consent is more of an opt-in and has a very clear and precise definition, at least in the EU, so GPC would not be considered consent under our framework, for a good reason; but GPC would be considered a 'reject' because it's an opt-out decision. So, just a few examples to give the audience a feeling that what we're talking about is not just academic work. We're also interested in research projects in the EU, the U.S., etc., but there are also concrete things that we're discussing in the paper.
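A small server-side TypeScript sketch of honoring GPC - per the GPC specification, the signal arrives as the Sec-GPC: 1 request header; the handler names below are invented placeholders.

```typescript
// Illustrative sketch: because GPC expresses a "reject" decision (an
// opt-out), honoring it automatically discloses no data - it only
// prevents disclosure.

function hasGlobalPrivacyControl(
  headers: Record<string, string | undefined>,
): boolean {
  return headers["sec-gpc"] === "1";
}

function handleRequest(headers: Record<string, string | undefined>): void {
  if (hasGlobalPrivacyControl(headers)) {
    // Treat the signal as a do-not-sell/do-not-share opt-out, as the
    // CCPA requires for California residents.
    disableThirdPartyDataSales();
  }
}

// Placeholder for whatever opt-out plumbing a real site would have.
function disableThirdPartyDataSales(): void {}
```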

Debra J Farber:

Absolutely. I'll add a link to the paper in the show notes so that folks can go ahead and read that. It's not too long - it's only nine pages - but it's got a lot of really great info in there. In fact, you evaluate each type of privacy decision and then the implications of various levels of automation on those decisions - manual, semi-automated, and fully-automated. Please tell us about some of your findings when it comes to these different levels of automation.

Simone Fischer-Hübner:

So, manual decisions do not raise any legal issues. They are quite straightforward; that is what we do every day anyhow. However, of course, there are usability challenges, as discussed, because making all the settings - permission settings or giving consent - manually always requires a high cognitive load from the user and a lot of time. There has also been research showing that just reading all the privacy policies one encounters would take several days a year. So, users simply do not have the time to really read all the policies and make well-informed decisions. In summary, manual decisions do not raise legal issues, but they do raise usability issues.

Simone Fischer-Hübner:

Semi-automated decisions are those where decisions are made one at a time upon dynamically-created requests, or in reaction to dynamically-created recommendations. Recently, a lot of research has been done on so-called 'personalized privacy assistants' that, with machine learning support, can predict the user's privacy preferences and then dynamically suggest changes to permissions, so that the permission settings fit the real preferences. The user is somehow nudged to change the permission settings. This can also raise some ethical issues, because privacy nudges, even when they follow a positive intention, can impact the user's autonomy and could manipulate the user. So, there are also some discussions around ethical aspects. Still, they can help users to make decisions that better match their preferences and also simplify making decisions. So, they surely have advantages for usability.
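A minimal TypeScript sketch of that semi-automated pattern - the interfaces are invented for illustration: a predictor proposes a permission change, but nothing is applied until the user explicitly confirms, so the final decision stays with them.

```typescript
// Illustrative sketch: a personalized privacy assistant recommends, the
// user decides. Showing a rationale keeps the nudge transparent.

interface PermissionSuggestion {
  permission: string;
  suggestedState: "granted" | "denied";
  rationale: string; // shown to the user alongside the suggestion
}

interface PreferencePredictor {
  suggest(userId: string, permission: string): PermissionSuggestion | null;
}

function reviewPermission(
  predictor: PreferencePredictor,
  userId: string,
  permission: string,
  askUser: (s: PermissionSuggestion) => boolean, // UI prompt; user decides
  apply: (permission: string, state: "granted" | "denied") => void,
): void {
  const suggestion = predictor.suggest(userId, permission);
  // Semi-automation: the change is applied only if the user accepts it.
  if (suggestion && askUser(suggestion)) {
    apply(suggestion.permission, suggestion.suggestedState);
  }
}
```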

Simone Fischer-Hübner:

And when it comes to consent, there has also been work on dynamic consent, where consent requests are invoked dynamically - for instance, when the purposes for data processing change, or when a context arises in which the data becomes sensitive: so-called 'special categories of data' pursuant to the GDPR. For instance, location data about the current location of the user can indicate that the user is visiting a hospital or a church, so the data becomes medical data or data related to religious beliefs. For special categories of data, explicit consent is required, so a consent request could be invoked dynamically. So, we can also have dynamic consent as a form of semi-automated decision-making. There are different forms of semi-automation.

Simone Fischer-Hübner:

But here, our finding is also that full automation mostly conflicts with the GDPR, because for consent you need an affirmative action; consent cannot be given automatically. Also, for privacy permissions and privacy preference settings, if they are set automatically, they might contradict default settings that implement privacy by default - they might change the privacy-by-default settings in a way that allows data processing more generously, which is not the most privacy-friendly option. Here we then have a conflict with Article 25 of the GDPR: the principle of privacy by default. So our conclusion was that fully-automated decision-making only works for the decision to reject. That means that, for instance, revoking consent and opting out can be done automatically in line with the GDPR, but all other decisions are problematic to automate fully.
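One way to summarize these conclusions is as a small lookup table; the TypeScript sketch below is a reading of the discussion above, not code or a table from the paper itself.

```typescript
// Illustrative summary: manual decisions are lawful but burdensome;
// "reject" may be automated at any level because it discloses no data;
// full automation of anything else lacks an affirmative action and can
// conflict with privacy by default (GDPR Article 25).

type DecisionType = "permission" | "preference" | "consent" | "reject";
type AutomationLevel = "manual" | "semi-automated" | "fully-automated";
type Assessment = "lawful" | "grey zone" | "problematic";

function assess(decision: DecisionType, level: AutomationLevel): Assessment {
  if (level === "manual") return "lawful";    // but poor usability
  if (decision === "reject") return "lawful"; // opt-outs may be automated
  if (level === "fully-automated") return "problematic";
  // Semi-automation can be lawful if the user still takes the final,
  // informed decision (e.g. confirms a recommendation).
  return "grey zone";
}
```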

Debra J Farber:

That's a really good summary of the research. Do you have anything to add to that, Victor?

Victor Morel:

Yeah, about the full automation of privacy decisions: it surprisingly connects to another paper we recently published with Piero Romare and Farzaneh Karegar, still in the same project, the CyberSecIT project. We were interested in the preferences and concerns of users about a certain type of IoT trigger-action platform, such as IFTTT, which provides applets to basically automate some of your decisions. So, you would link your IoT device to connect with your cloud service, et cetera; and people were not really happy about full automation of decisions about their personal data. They were interested in the automation aspect, but they still wanted to be in control.

Victor Morel:

So there are the legal requirements, but there are also the expectations of everyday people; and basically, people want to stay in the loop. They want some facilitation of their decision-making, but they don't want to be left out. So, it's interesting to see that this also matters from a usability point of view. People want to be helped with their decisions, but they want to feel in control, in a way. This is something I wanted to add.
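A short TypeScript sketch of keeping the user in the loop on an IFTTT-style trigger-action platform - the applet shape and the confirmation callback are invented for illustration: the trigger fires automatically, but an action that would disclose personal data waits for a one-tap confirmation.

```typescript
// Illustrative sketch: automation with the user kept in the loop.

interface Applet<T> {
  trigger: (onEvent: (data: T) => void) => void; // e.g. "motion detected"
  action: (data: T) => Promise<void>;            // e.g. "post to cloud"
  disclosesPersonalData: boolean;
}

function runApplet<T>(
  applet: Applet<T>,
  confirm: (summary: string) => Promise<boolean>, // e.g. a push notification
): void {
  applet.trigger(async (data) => {
    if (applet.disclosesPersonalData) {
      // The user stays in control: the action runs only after confirmation.
      const ok = await confirm("Share this event with the service?");
      if (!ok) return;
    }
    await applet.action(data);
  });
}
```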

Simone Fischer-Hübner:

Yeah, and indeed there is also research from the U.S. proposing to automate consent or permission settings, but it also refers to research by others that has shown that users would like to stay in control. So, there are technologies proposed for fully automating consent or other decisions - especially from the United States - but in the European context, you can question whether this is legal with reference to the GDPR.

Debra J Farber:

From a socio-technical perspective, I think it's fascinating because it's almost like a sense of a bigger idea of autonomy: we don't want people making decisions about us. We want to understand how decisions are made.

Simone Fischer-Hübner:

Research has shown that you can predict the privacy preferences of a user with 95% accuracy. So, automation probably can work very well. And there's also research suggesting that, with automation, the resulting privacy permission settings will probably meet your privacy preferences better than if you let the user make the decision, given that he or she usually does not have the time and dedication to really make a well-informed decision. However, as I said, 95% sounds very good, but there's also the 5% where there are deviations - where the system automatically sets something that you do not agree to - and then you will be very unhappy. So I can also see why users would still like to have control.

Debra J Farber:

Right. So, based on your findings, at a high level, which conditions enable the automation of privacy decisions while complying with regulations, and which ones are just, "We should never use these because they'll never comply?" Is it just full automation? Is that the answer? Or do you have something else to add there?

Simone Fischer-Hübner:

Yeah, fully-automated, except for the decision type 'reject.' So, when you refuse to give consent, revoke consent, or opt out, that can be done fully automatically and still comply with the GDPR.

Debra J Farber:

Oh, that's a really good call out.

Victor Morel:

Yeah, and actually the GDPR even mentions that it could be interesting to exercise your right to object - to profiling, for instance - using automated means. So, it kind of goes in our direction that it should be possible to fully automate the withdrawal of consent, opting out, and everything like this. But this is basically the only use case, and this is also why we created this broad category 'reject,' in which we include opt-out, withdrawal, and the right to object - because it is the only good use case for fully automating the decision. And also because you don't disclose any data; you only prevent the disclosure of data. So, it's like a negative definition in that specific case.

Simone Fischer-Hübner:

Yeah, so the GDPR only regulates how you have to give consent, not how to reject it; and actually there are also several tools for 'reject,' like Consent-O-Matic, right, that rejects cookies by default.

Victor Morel:

Yeah, we found, indeed, a few extensions for web browsers that help with decisions on cookie banners, and some are arguably not compliant with the GDPR because they consent on your behalf, and this is not okay. But some take a more privacy-preserving approach and only dismiss the cookie banners - such as Consent-O-Matic, which was designed by colleagues in Denmark. This web extension looks for all of the cookie banners and dismisses everything. That is actually fine to do, even in a fully-automated way, because you won't disclose any data to the website.
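A tiny TypeScript sketch of that reject-only approach, in the style of a browser content script - the selectors are invented for illustration; a real tool like Consent-O-Matic ships curated, per-banner rules.

```typescript
// Illustrative sketch: click only "reject"/"dismiss" controls, never
// "accept" - rejecting discloses no data, so automating it is fine.

const REJECT_SELECTORS = [
  "button[data-testid='reject-all']", // invented example selector
  "button.cookie-decline",            // invented example selector
];

function dismissCookieBanners(root: Document): number {
  let dismissed = 0;
  for (const selector of REJECT_SELECTORS) {
    root.querySelectorAll<HTMLButtonElement>(selector).forEach((button) => {
      button.click();
      dismissed++;
    });
  }
  return dismissed; // how many banners were rejected on this page
}
```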

Debra J Farber:

Oh, that's fascinating. So, to what extent can automated privacy decisions promote informed control in line with the GDPR, to the benefit of users? I guess what I'm asking for here is best practices for informed control. What would you recommend?

Victor Morel:

I think an interesting line would be to provide tailored recommendations. This is a form of automation, in a way, because you basically feed in some data, but you don't take decisions on behalf of users; you just help them make an informed decision. So, we think it's an interesting way, and it fits typically in the semi-automated category that we designed. We also surveyed artificial agents that provide automated negotiation of privacy decisions, which can be done in a way that you never disclose data unknowingly. This is also a very interesting line of approach. And finally, we surveyed some work that provides requests on-the-fly. It's also a form of automation. Maybe, Simone, you can say a few more words, because you work on projects that involve these on-the-fly privacy decisions.

Simone Fischer-Hübner:

Yes, so this is basically also part of the semi-automated decision category.

Simone Fischer-Hübner:

For instance, if the user acts or makes a decision that does not match the privacy preferences he or she previously declared, then the user would be asked whether he or she anyhow wants to give or reject consent; and at the same time, the user can be asked on-the-fly whether he or she would now like to update the privacy preference settings.

Simone Fischer-Hübner:

And we are also considering implementing that for privacy permission settings in the context of IoT trigger-action platforms, so that here too, on-the-fly, the user can be asked to change permission settings.

Simone Fischer-Hübner:

For instance, in a context where dynamic consent is invoked because there are grounds requiring it: the user has previously given consent, but now a situation arises where dynamic consent is required - for instance, if the data suddenly becomes sensitive or, as Victor also elaborated, if data is transferred outside of Europe or used for profiling - then the GDPR requires renewed consent. In this context, depending on how the user answers, the user can also be asked whether the permissions should be changed accordingly, so they are set on-the-fly. Then the user does not need to be bothered constantly with setting permissions; the user can start with privacy-by-default permission settings, which are then updated on-the-fly. That means the user is asked whether the settings should be changed in a context where he or she anyhow has to make a decision.
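A compact TypeScript sketch of this on-the-fly flow - the context flags and callbacks are invented for illustration: start from privacy-by-default settings, prompt only when the context demands a new decision, and piggyback the preference update onto that same prompt.

```typescript
// Illustrative sketch: dynamic consent plus on-the-fly preference updates.

interface ProcessingContext {
  sensitiveData: boolean;     // e.g. location reveals a hospital visit
  transferOutsideEU: boolean; // third-country transfer without safeguards
  profiling: boolean;         // automated decision-making about the user
}

function needsDynamicConsent(ctx: ProcessingContext): boolean {
  return ctx.sensitiveData || ctx.transferOutsideEU || ctx.profiling;
}

async function processOnTheFly(
  ctx: ProcessingContext,
  askConsent: () => Promise<boolean>,           // explicit consent prompt
  askUpdatePreferences: () => Promise<boolean>, // "update your defaults?"
  updatePreferences: (ctx: ProcessingContext) => void,
  proceed: () => void,
): Promise<void> {
  if (needsDynamicConsent(ctx)) {
    if (!(await askConsent())) return; // rejected: nothing is disclosed
    // Piggyback on the moment the user is already making a decision:
    if (await askUpdatePreferences()) updatePreferences(ctx);
  }
  proceed();
}
```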

Debra J Farber:

That makes sense. So, in your paper, and here, we're talking about privacy decisions and automation across three separate technologies: web, mobile and IoT - how should organizations think about achieving usable and transparent privacy with automation across technologies through a comprehensive approach?

Victor Morel:

I think that, first of all, it's important to understand that if you want to achieve usable and transparent privacy, you can't just check off a list and think that you're done with it. I would say that it's a process. You have an overarching principle - typically, privacy by design and by default - and you also have a very concrete indication that you can't just fully automate privacy decisions, because most of the time that goes against these principles and legal requirements. And then you have this blurry zone in between - semi-automation - that you actively have to think about. Optimally, you would conduct user studies: is it actually usable? Do people think that it improves their decision-making or not, from a usability point of view? And also, if you want to assess compliance, you would probably have to discuss it with a DPO (a Data Protection Officer), who will tell you whether you're actually compliant in your local jurisdiction or not.

Victor Morel:

But yeah, it's a complicated process. You actively have to think about every step. Also, every situation is different, because in the IoT you don't necessarily have interfaces with which you can provide information and therefore enable an informed decision - unlike the web, where you access everything through a browser and have a big screen, so you can actually see what's going on and provide information. So, every case is different and you have to assess it. It's like security: you can't just check off a list and think that you're done with it. You have to reflect on every step.

Simone Fischer-Hübner:

Yeah, I would say, as discussed, fully automating privacy decisions raises legal concerns, except for the decision to reject. Manual privacy decisions are in line with the GDPR; however, they lead to usability issues, because users do not have the mental capacity to make so many decisions and be well-informed. Usability issues, in turn, lead to decisions that are not well-informed and do not necessarily meet the user's preferences. Therefore, the middle way - semi-automation - is probably the best way to go, and you have to find suitable means for semi-automatically supporting privacy decisions while meeting the legal requirements of the GDPR. In our paper, we also provide some examples of such semi-automation.

Debra J Farber:

Excellent. Now, I know from your paper that your research is illustrative and non-exhaustive, and that as new technologies emerge, you'll be adding to the categorization. You also have plans for a next phase of this research, maybe around IoT. I'd love to learn more about what you have planned.

Victor Morel:

Yeah, indeed. We plan to work on the IoT because that was initially the reason why we started this paper: we want to build a privacy assistant for the IoT, and specifically for trigger-action platforms that use IoT devices. Trigger-action platforms, like I mentioned before, can connect all kinds of devices and services, so that can mean a lot of decisions for a random person - even for a knowledgeable person, I would say. So, the idea would be to build these privacy profiles according to a longitudinal study, so that you will have your predefined profile - like Privacy Pragmatic or Privacy Guardian, for instance - and, according to that, you will have predefined choices about your privacy decisions. It's always about empowering people, about supporting their decisions by providing better and informed notices, for instance; and then we will combine it with another kind of automation - on-the-fly privacy permissions - so that people are not burdened too much with decisions, but only when needed, when required. Yeah, we're trying to design it to resolve this tension between usability and lawfulness in this kind of environment specifically. This is basically the project for the upcoming year. We'll see how it goes. We also have another interesting track because, as you said, the paper is not meant to be comprehensive: we're now also building a systematic literature review about privacy decisions and their relation to automation.

Victor Morel:

We're starting to survey over 100 different papers related to privacy permissions, privacy preferences, consent, and reject, to see what has been done in the past: whether the approaches were accurate or not; whether they used machine learning, complex models, or simple rules; which environments they targeted; what the source of the data for the automation was; how it was automated; etc. So, we're trying now to be exhaustive, in a way. It's still very preliminary work. We don't have many results yet, except that, interestingly, we found that many, many papers were drafted in the 2010s about recommendation systems for social networks. So, what we're trying to achieve is not so novel. We're trying to do it in a different setting, which is novel in a way, but people have tried to do that before, and they were not always successful, let's say. But yeah, this is one of the main findings that we have so far about the comprehensiveness of the study. Simone, if you want to add something?

Simone Fischer-Hübner:

Yes, that was nicely summarized. I can also add that, with further colleagues from Chalmers and Karlstad, we conducted three focus groups to derive qualitative research results about users' preferences and concerns for IoT trigger-action platforms. The results will also allow us to implement a semi-automated approach for machine-learning-supported prediction of privacy preferences, which of course has to be done in a privacy-preserving manner, and which can then be combined with on-the-fly privacy permission management - an easy, automated approach for asking users whether they want to revise their decisions in a context where they anyhow need to make decisions.

Debra J Farber:

Thank you. So, I know that you did some research when it came to browsers, permissions, and settings, and that you even took a look at Mozilla. Can you tell us a little bit about your work there and any calls to action?

Victor Morel:

Yeah. When we started to look at all the types of privacy decisions, we realized that, when it comes to consent, people have been trying to design tools to automate privacy decisions - sometimes in lawful ways, but not all the time - and we found out that Mozilla was providing, in its beta version, a way to automate cookie banner management.

Victor Morel:

So, I do think that Mozilla is trying to do a good job when it comes to privacy, so I don't want to bash them here. But we also found that they're providing a solution that will basically click "Yes, I consent" on cookie banners on your behalf if it can't find another solution, and this goes against the GDPR requirements. So, I tried to contact them - the legal team, notably - to try to help them out with this, so that they could go for the most privacy-preserving solution. Unfortunately, they haven't answered yet; so, if I could use this episode to reach out to Mozilla so that they can actually make the right choice and not go down the slippery slope, which would basically make this web browser click "Yes, I consent" on every type of cookie banner, that would be great.

Debra J Farber:

Let's see if this platform can get you in front of someone at Mozilla who can answer the call. So, I think that's a really noble goal. Thanks for sharing that. Do either of you have any last words of wisdom regarding this research, anything you'd want to share with my audience of privacy engineers, or any open source projects or conferences or anything?

Simone Fischer-Hübner:

Just that usability for privacy is one of the most difficult areas to address. We have very nice privacy-enhancing technologies, and the same holds for security - there are good security solutions - but I think the major challenges are the human factors and how to make privacy usable. However, the GDPR also requires usability - for instance, in Article 12 - because you can only achieve transparency and informed decisions with usability; and I think our research hopefully contributes to this end. But there are still a lot of challenges remaining. This is just a first categorization and an indication of what directions to go in; a lot of research still needs to be done. So, maybe that is my final word.

Debra J Farber:

Awesome. So if anybody is working on usability versus lawfulness research and wants to reach out to you, what's the best way to reach out to you both? Is it via LinkedIn? Should I just drop emails in the show notes?

Simone Fischer-Hübner:

I think you can put a link to our paper, and there should be our email.

Victor Morel:

Yeah, email address. I stopped using LinkedIn because of privacy issues, let's say. But we also keep the website of the project updated. So, if you want to stay informed about what we're doing in the cybersecurity project, you can just visit the CyberSecIT website, where you will find links to all of our papers - which are always published open access - and links to any kind of news. It will always be on the website, and the link, I think, will be provided in the description.

Debra J Farber:

Simone and Victor, thank you so much for joining us today on Shifting Privacy Left. We had a great discussion on the tension between usability and lawfulness in the automation of privacy decisions. So, until next Tuesday, everyone, when we'll be back with engaging content and another great guest - or guests. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of the show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.

Introducing Victor Morel, PhD and Professor Simone Fischer-Hübner, PhD
Discussion on the Types of Privacy Decisions identified and Levels of Automation of those decisions - determining whether each is lawful and provides individuals with meaningful control (usability)
Simone's & Victor's findings around the different levels of automation for each type of privacy decision: manual, semi-automated, and fully-automated
We discuss under which conditions organizations can enable the automation of privacy decisions while complying with regulations, and which decisions should not be automated
We discuss best practices for informed control
Simone & Victor explain how organizations should think about achieving usable and transparent privacy with automation across technologies through a comprehensive approach
Victor explains the next steps for their research, which will focus on the lawfulness and usability issues of automating privacy decisions in the context of IoT technology
Victor shares Mozilla's approach to automated cookie banner management, and why it's problematic (i.e., unlawful)
