The Shifting Privacy Left Podcast

S2E9: Personalized Noise, Decaying Photos, & Digital Forgetting with Apu Kapadia (Indiana University Bloomington)

Debra J. Farber / Apu Kapadia Season 2 Episode 9

In this episode, I'm delighted to welcome Apu Kapadia, Professor of Computer Science and Informatics at the School of Informatics and Computing, Indiana University. His research is focused on the privacy implications of ubiquitous cameras and online photo sharing. More recently, he has examined the cybersecurity and privacy challenges posed by AI-based smart voice assistants that can listen and converse with us.


Prof. Kapadia has been excited by anonymizing networks since childhood. He has memories of watching movies where a telephone call was routed around the world so that it became impossible to trace. What really fascinates him now is how much there is to understand mathematically and technically in order to measure how much privacy such systems actually provide. In more recent years, he has been interested in privacy in the context of digital photography and audio shared online and on social media. His current research is focused on understanding privacy issues around photo sharing in a world with cameras everywhere.

In this conversation, we delve into how users are affected once privacy violations have already occurred, the privacy implications for children when parents share photos of them online, the fascinating future of trusted hardware that could help ensure "digital forgetting," and how all of this is as much a people problem as it is a technical problem.


Topics Covered:

  • Can we trick 'automated speech recognition' (ASR)?
  • Apu's co-authored paper: 'Defending Against Microphone-based Attacks with Personalized Noise'
  • What Apu means by 'tangible privacy' & what design approaches he recommends
  • Apu's view on 'bystander privacy' & the approach that he took in his research
  • How to leverage 'temporal redactions' via 'trusted hardware' for 'digital forgetting'
  • Apu’s surprising finding in his research on "interpersonal privacy" in the context of social media and photos
  • Guidance for developers building privacy-respecting social media apps
  • Apu's research focused on cybersecurity & privacy for marginalized & vulnerable populations
  • How we can make privacy & security more 'usable'




Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Buzzsprout - Launch your podcast


Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.

Debra Farber  0:00 
Hello, I am Debra J. Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans, and to prevent dystopia. Each week we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems.

Debra Farber  0:27 
Today I'm delighted to welcome my next guest, Apu Kapadia, Professor of Computer Science and Informatics at the Luddy School of Informatics, Computing and Engineering at Indiana University. He received his PhD in computer science from the University of Illinois at Urbana-Champaign, and for his dissertation research on trustworthy communication, he received a four-year High-Performance Computer Science Fellowship from the U.S. Department of Energy. Following his doctorate, Professor Kapadia joined Dartmouth College as a postdoctoral research fellow with the Institute for Security Technology Studies, and then as a member of technical staff at MIT Lincoln Laboratory. Professor Kapadia is particularly interested in usable and human-centered security and privacy. His research has focused on the privacy implications of ubiquitous cameras and online photo sharing; and more recently, the cybersecurity and privacy challenges posed by AI-based smart voice assistants that can listen to us and converse with us.

Debra Farber  1:32 
Welcome, Apu.

Apu Kapadia  1:33 
Oh, thank you for having me.

Debra Farber  1:35 
Oh, I'm so excited for you to be here. You're actually the first professor I've had on the show, so I really look forward to this. Oh, nope. That's actually incorrect. I had Professor Lorrie Cranor on.

Apu Kapadia  1:47 
Oh, that's great.

Debra Farber  1:48 
Yeah. So of course, like, you know, the mother of privacy engineers there. But besides Lorrie, you're the first researcher I've had on independently, not talking about the profession of privacy engineering but rather the research that's being done in some novel areas. So welcome. To start off, could you just tell us a little bit about your current research?

Apu Kapadia  2:11 
Yeah, so for the last several years, I've been very interested in privacy in the context of digital photography and online social media. So, for example, somebody could just take an embarrassing photo of you; and then the next thing, you know, it's gone viral on social media. So, my current research is focused on understanding privacy issues around photo sharing and, you know, cameras everywhere, including wearable cameras like Google Glass from several years ago, if you remember that one.

Debra Farber  2:40 
Sure.

Apu Kapadia  2:41 
And there's so many interesting questions here. What makes a photo sensitive to begin with? There's so many reasons why somebody might say, "Oh, don't share this photo of me." So, you probably can't capture all possible reasons, but we could make a pretty good start at it by trying to understand how people think about photos and what makes them sensitive; and then from a technical perspective, well, can we detect those things in a photo? So, that's been one aspect of the research.

Apu Kapadia  3:09 
Another aspect has been to understand what actually motivates people to share these photos. So this is, you know, ongoing work with psychologists, for example. And once you understand why people want to share this photo, you can then try to influence them. You could try to nudge them into maybe not sharing the photo or maybe applying some redactions or stickers to their photos. So, that's the general space, I'll just give you one interesting example of recent work that highlights this importance of considering the human dimension, which is what usable security and privacy is all about. When you build technical solutions, you also need to consider the human or the user.

Apu Kapadia  3:51 
One might expect that just popping up a warning, for example, saying "Are you sure you really want to share this photo? What about the privacy of the person in the photo?" would at least, you know, make people think a second time and be less likely to share a photo; but, we consistently found in our experiments that when we posed the warning text as a question, "Considering the privacy of the person in the photo, how likely are you to share this photo?" they were typically more likely to share the photo after seeing a privacy warning. So...in other words, the privacy warning backfired and it made people more likely to violate someone else's privacy. This is a really surprising finding, kind of showing how an engineer, a computer scientist, might say, "Yep, stick a privacy warning up." But then you study it with humans using social science techniques and you find out that the opposite might actually be true. And we still have more work to do in understanding why this is the case. We have some ideas. So that's the whole line of work related to photos and privacy.

Apu Kapadia  5:00 
And pivoting from photos now, I've become very interested in audio. So, I've been exploring security and privacy issues related to audio and voice. You know, the AI-based smart voice assistants are now ubiquitous. They're listening to us in our homes. Some people have their microphones on all the time. They are getting to the point where they can converse with us. So in general, I've been very interested in the security and privacy challenges raised by this voice-based interaction between smart voice assistants and people.

Debra Farber  5:34 
Thank you so much for that. You know, I'm excited to dig into some of that research. It's definitely an area where I've had concerns myself, just as a user of social media and a user of voice assistants. I have Echoes in my home, several of them; but even as a privacy expert, you know, there are tradeoffs that you make when you decide what to bring into your personal space. So, I definitely have them and I look forward to unpacking some of this with you. First, in the opposite order of how you introduced the topics, to set up my next question, I'm going to read part of the abstract of a paper you co-authored, "Defending Against Microphone-based Attacks with Personalized Noise."

Debra Farber  6:19 
So, "Voice-activated commands have become a key feature of popular devices such as smartphones, home assistants, and wearables. For convenience, many people configure their devices to be 'always-on' and listening for voice commands from a person using a trigger phase phrases such as, 'Hey Siri,' 'Okay, Google,' or 'Alexa.' However, false positives for these triggers often result in privacy violations with conversations being inadvertently uploaded to the cloud. In addition, malware that can record one's conversations remains a significant threat to privacy. Unlike with cameras, which people can physically obscure and be assured of their privacy, people do not have a way of knowing whether their microphone is indeed off and are left with no tangible defenses against voice-based attacks." So, if you can, please tell us about the approach you took to solve for these problems and the results of your research.

Apu Kapadia  7:13 
Indeed, how do we know whether a microphone on a device is actually off? Many of us probably put stickers on our cameras, and then we know that the camera can't see us. But with a microphone, some people might try to do the same thing with a piece of tape, but tape is not enough to muffle a microphone the way it covers a camera. So, we set out to find an analogous solution - something simple, hopefully, that would be similar to applying tape to a camera. What's the equivalent for a microphone? We didn't have clear answers at first, but then we realized, "Oh, what if we could just play noise into the microphone, like babbling gibberish into the microphone?" It's a fun, fun idea, you know, but does it really work?

Apu Kapadia  8:01 
And so how do you evaluate whether such a solution works, right? So, we considered two situations. One is Automated Speech Recognition, which is called "ASR." When you're dictating text to your phone, for example, and it's transcribing it, it's using ASR - automated speech recognition. So, can we trick ASR algorithms so that they can't actually listen to us? It turns out there are some fun ways to do this using noise that's even imperceptible to humans, and somehow it just tricks the ASR into completely failing or transcribing incorrect text. However, with all these approaches, a human eavesdropper with access to your microphone can still listen to what's being said and eavesdrop on your conversation. So basically, we wanted an approach that tricks both human-based eavesdroppers and AI-based eavesdroppers, these ASR systems.

Apu Kapadia  9:02 
Anyway, after a long series of experiments, looking at different scenarios - where does one fail versus the other not fail, and so on - we ended up with a combination of babble noise, or babble speech, if you want to call it that, that could trick both: humans and ASR systems. In a nutshell, this resulted in eight voice tracks; you can imagine eight random conversations being overlaid on top of each other, and then applying a technique called 'voice conversion.' This is where you could take speech said by someone else and convert it to sound like I was saying it. This is a whole line of research, and there are many different ways to try to do it, with different levels of effectiveness. But, we applied this technique to combine eight speech tracks, voice-convert them, and we found that these worked the best. Both ASR systems as well as humans could not really tell what was being said in the vicinity of the microphone.

Apu Kapadia  10:06 
So what we showed in the end was that it is feasible for lay users (if they had access to babble noise - a babble noise device or tracks): they could just place a headphone that's playing this babble noise next to the microphone of, say, your phone or an Alexa-like smart voice assistant, and then anybody with access to the audio coming through that microphone would not really be able to tell what was being talked about in that room.
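To make the overlay idea concrete, here is a minimal sketch of mixing several speech recordings into a single "babble" masking track. It assumes mono, 16-bit WAV files at a common sample rate; the file names are hypothetical, and the voice-conversion step Apu describes is omitted.

```python
# Minimal sketch: overlay several speech tracks into one "babble" masking signal.
# Assumes mono, 16-bit WAV files at the same sample rate; file names are hypothetical.
import numpy as np
from scipy.io import wavfile

track_paths = [f"speech_{i}.wav" for i in range(8)]  # eight random conversations (hypothetical files)

rate = None
tracks = []
for path in track_paths:
    sr, data = wavfile.read(path)
    if rate is None:
        rate = sr
    assert sr == rate, "all tracks must share one sample rate"
    tracks.append(data.astype(np.float64))

# Trim every track to the shortest length so they can be summed sample by sample.
n = min(len(t) for t in tracks)
mix = sum(t[:n] for t in tracks)

# Normalize back into 16-bit range to avoid clipping, then write the babble track.
mix = mix / np.max(np.abs(mix)) * 0.9 * np.iinfo(np.int16).max
wavfile.write("babble_mask.wav", rate, mix.astype(np.int16))
```

The system Apu describes goes a step further by voice-converting the mixed tracks so a human eavesdropper cannot isolate the target speaker; this sketch only illustrates the overlay step.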

Debra Farber  10:39 
So, I know that there have been attempts at creating noise that would prevent these devices from picking up audio - maybe by injecting some babbling, you know, sounds into it. Is the uniqueness of your research here the fact that it also would prevent human beings from understanding what is being said in those rooms, not just other devices that....

Apu Kapadia  11:03 
Yeah, exactly. It was the studying of both situations simultaneously. That was not clearly understood. I feel like we have anecdotal evidence that, you know, "Oh, just play music into a microphone and the other side will be confused." But using automated techniques - the state-of-the-art techniques to remove noise and to isolate conversations - you can recover such conversations. So, we set out to study, based on the latest technology, what is possible and what isn't possible. We were actually maybe expecting the ASR systems to do a pretty good job at picking out the conversation, but we ended up finding a combination of defenses that actually worked against modern systems.

Debra Farber  11:51 
Do you want to briefly mention what those defenses are?

Apu Kapadia  11:53 
Oh, no. It was basically the idea of combining some number of tracks; too few may not work and too many tracks may not work either.

Debra Farber  12:02 
Oh, I see.

Apu Kapadia  12:03 
If you overlay too many tracks, they end up sounding like noise - like white noise. And then, with good noise-removal algorithms, you can actually end up hearing the conversation. It was a trial-and-error approach to finding the right set of parameters, including realizing, "Oh, the voice conversion actually does help," because the eight tracks on their own sound like other people, which allows you to isolate the conversation of your target; once you apply some voice conversion, it becomes harder, or impossible, to eavesdrop on your target.

Debra Farber  12:45 
Fascinating. And then, did you get...like mathematically get to the number 8 for the number of tracks?

Apu Kapadia  12:51 
It was just experimental. To evaluate the effectiveness of our approach, trying various numbers of tracks, the metric we used in the end was the "word error rate." So, if you played the full noise track along with the embedded audio (that's the conversation the attacker is trying to eavesdrop on) and they try to transcribe what the person is saying, we calculate the fraction of incorrectly transcribed words out of the total number of words. That gives you the "word error rate." So, for example, if the word error rate is 10%, that means 10% of the time you're not hearing the words, right? You're confusing those 10% of the words with something else, and so that's the metric we used to evaluate. We found very, very high word error rates when we had eight tracks overlaid on top of each other with the voice conversion.
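For a concrete sense of the metric, here is a minimal word-error-rate sketch using a standard word-level edit distance (substitutions, insertions, and deletions over the length of the reference transcript); the paper's exact tooling may differ, and the example sentences are made up.

```python
# Minimal sketch of word error rate (WER): word-level edit distance divided by
# the number of words in the reference transcript. Not the paper's exact tooling.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Example: a garbled transcript of the target conversation yields a high WER (here about 0.57).
print(word_error_rate("meet me at the bank at noon", "beep me a the tank at new"))
```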

Debra Farber  13:46 
Give me a sense of what high would be in this context. Like, are we talking like 99%?

Apu Kapadia  13:51 
Yeah, yeah, 95 - 99%. We're trying to make the conversation practically impossible to understand.

Debra Farber  14:00 
That makes sense.

Apu Kapadia  14:01 
I mean, that would be the goal, but I would say anything 90% and above would be pretty good. So you can hear some words once in a while, but not enough to make sense of the conversation.

Debra Farber  14:13 
And in this paper, your team argues for better microphone designs for "tangible privacy." Tell us what tangible privacy is in your view, and what design approach do you propose?

Apu Kapadia  14:26 
Yeah, so as we mentioned earlier, there's this example of covering a camera. This is something you do by hand; it gives you this tangible sense of touch. And it also gives you a strong assurance of privacy: you just know that this camera cannot record you. With the microphone, you may be able to physically manipulate the microphone in a certain way, but you don't have the strong assurance that the microphone cannot hear you. And so these are what we call "tangible mechanisms of privacy," where there are two things you need to have. First of all, you physically interact with the device, so you really get a sense of the manipulation that you're doing on the device; and then, after that control or manipulation, you end up with a strong assurance of privacy. So these are the two properties we look for - are you giving enough control and enough feedback to users for them to really know that they're not being seen or not being heard?

Apu Kapadia  15:28 
So, for example, if you have a smart voice assistant and you flip the mute button, you are physically manipulating the device, but then you don't really know what's happening within the device: has it really cut off the microphone, or is it just software disabling the microphone? Now, that's worrisome because software is not foolproof. You could have malware in there that just overrides what the software is saying and can listen in on you. And so this is where we set out to think about how you would design these physical devices. How would you design a device with a microphone that would provide tangible guarantees of privacy to the owner or to somebody around that device?

Apu Kapadia  16:14 
So, the designs that we proposed were, as some fun examples, right...when you flip that switch, you can actually see, through a little window, the circuit being broken. Right? Like when you plug your headphones into a device - can we literally show that kind of action taking place when you flip the mute button? And then, that will give people a very tangible sense of privacy because not only did they manipulate the mute button physically, they even saw the connection being broken. So, that's just an example of what we're proposing: the way these devices are built, they should have a hardware-based disconnect of the microphone; a software-based disconnect is not enough.

Debra Farber  17:05 
Fascinating. So, you also talk about tangible privacy in a different paper, where you argue for tangible, user-centric sensor designs for bystander privacy. Can you describe what you mean by "bystander privacy" in this sense, and then the approach that you took?

Apu Kapadia  17:26 
Yeah, so by bystander, we simply mean a person in the vicinity of someone else's device. So they're not the owner, but they're just some bystander who's being captured, recorded by that device, right? If you go to somebody else's home, you may or may not be familiar with the controls of this device. What is this device? How does it work? You don't have a clear understanding of whether you're being recorded or not. So, in that case, we would call you a bystander.

Apu Kapadia  17:55 
So, this work was actually a follow-up to the one I previously mentioned, where we just understood this notion of tangible privacy and the need for it. We understood this need for tangible privacy through interviews with people, and that then inspired some suggestions for design, like the one I mentioned about breaking the circuit. But in this work, we actually experimentally tested out different design scenarios. And so what we did was we looked at combinations along two axes. On one axis, you had physical and software-based controls. An example of a physical control is the mute switch, and the software control is a virtual button to mute the microphone. And then, on the other axis, physical versus software-based feedback. The physical feedback is the example of you seeing the connection being broken. And the software-based feedback is just the device telling you, "Yep, your microphone is muted," say by drawing a line through the microphone symbol.

Apu Kapadia  19:03 
And so, we conducted experiments across all these scenarios to see which was better. And, as one might expect, we did find that the physical controls improved people's sense of privacy. That's what we were hoping, or expected, to find - that tangible controls are important. But what's interesting is that we found that people tended to simply trust the physical switch. So, the feedback component didn't seem to matter. This idea of showing a broken circuit didn't seem to improve people's sense of privacy over not seeing it at all and just getting the software feedback. So the really, really important implication here is that the hardware designers, the designers of these devices, have a big responsibility, because users expect that when they flip the microphone switch it is actually working - equivalent to seeing the microphone connection being broken. Users actually expect that there is no way the device could be listening to them. So the designers of these devices should make sure that there is a hardware disconnect taking place, whether or not it is shown visually to the user; other bodies - agencies that can verify the design of the device - could confirm it, and it could be advertised as a hardware disconnect. And then, people have that assurance along with the physical manipulation.

Debra Farber  20:35 
You know, that makes a really good point. When you're looking at new product specs - which I can't say I do very often, except if I'm buying a new phone and want to understand my upgrade options, or some new gadget - you want to understand, like, do I need the more expensive one, what's in it? But if you're comparing specs, you're absolutely right, and I want to underscore this: companies that sell these products would really benefit from explaining how the hardware meets user expectations of privacy. So not just security, but really putting that privacy piece in there, and not relying solely on software and UX/UI to indicate and give comfort to individuals. So first, a call out to the hardware designers - they should be thinking in these terms. And then to product owners, you know, this is a way to earn trust. Who knows, maybe organizations like Consumer Reports, which has a privacy and security lab, will start to compare products based on the messaging around their hardware as well. These public assurances, especially in consumer devices - I can see that being really helpful in gaining more trust from individuals, even though their behavior...my assumption, based on what you said, is that their behavior shows a lack of understanding by most people of how the hardware actually works. So at a certain point, you just have to trust or not trust what you know about a product. And what you don't know about a product, you're not going to be thinking about...I guess you don't know what you don't know. So, that might be why the behavior didn't change when these assurances were provided.

Apu Kapadia  22:15 
Yeah.

Debra Farber  22:16 
Okay. So I'd like to turn my attention to another area that you've written about called "interpersonal privacy," in the context of social media and photos. In a paper you co-authored late last year (I believe it was in November) called "Decaying Photos for Enhanced Privacy: User Perceptions Towards Temporal Redactions and Trusted Platforms," you address 'interpersonal privacy,' where one person is violating another person's privacy - in this case, via social media. I don't intend for us to discuss the study specifically, but I would love to hear about your research on 'temporal redactions' that can be applied through the use of trusted hardware, which you raised in the paper.

Apu Kapadia  22:59 
Yeah, so in this work, we were really interested in the concept of, you could call it, 'digital forgetting,' right? We're putting our data out there, and it just stays there forever. It would be nice if devices and services forgot about our data. Now, Snapchat is a popular photo sharing application that has a feature like this, which they call 'ephemeral photos.' You share a photo with someone else; they get to see it for some number of seconds; and then it disappears. It's not a strong security guarantee because people can take screenshots on their device and so on. But still, for the most part, people are excited about this feature, and it tends to work. So, we were interested in studying some other types of temporal redactions. For example, if you have old photos that were printed, several years later they kind of fade. So we wanted to explore whether we could simulate that effect, where you share a photo with someone else and they get to see it the way it was intended to be seen - the original photo - but then, over time, sensitive aspects of that image just start degrading. So you might save that photo and keep it forever, but the sensitive aspects have now faded out. Those are the kinds of redactions we were studying. The place where trusted hardware comes in relates to what I said earlier about how, with Snapchat, you can't guarantee that the photo actually was deleted, because in the end it's going to a user's device and the user has access to the software on that device. So a sophisticated user could use a modified version of Snapchat to just not delete the photo.

Apu Kapadia  24:43 
So there's been a lot of research and implementations of trusted hardware. They're starting to show up on various devices. And the bottom line is that...well, the hope is that with trusted hardware, you can run software that a remote person, a remote party, somebody else can trust. So, I can send a photo to your phone and trust your phone to do what I want it to do, like delete my photo in 10 seconds. And then you, no matter what you do, are not able to make the phone do something else. Now, it's hard to achieve this type of an implementation in practice, but secure hardware / trusted hardware gets you closer to that. And there's a lot of work on that. So, we were interested in studying the human dimension of that question. If you know that the other person has trusted hardware, how likely are you to then share more sensitive photos?
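As a toy illustration of the decaying-photos idea Apu describes (not the authors' implementation), here is a sketch that blurs a designated sensitive region of a shared photo more heavily as time passes since sharing. The file name, region coordinates, and fade schedule are all invented, and a real deployment would rely on trusted hardware to enforce the decay, as Apu notes.

```python
# Toy sketch of a temporal redaction: blur a sensitive region more strongly
# the longer it has been since the photo was shared. File name, region, and
# fade schedule are hypothetical; a real system would enforce this in trusted hardware.
from datetime import datetime
from PIL import Image, ImageFilter

def decayed_view(photo_path: str, sensitive_box: tuple, shared_at: datetime) -> Image.Image:
    days_elapsed = (datetime.now() - shared_at).days
    radius = min(days_elapsed * 2, 50)  # blur grows with age, capped at a heavy blur
    img = Image.open(photo_path)
    if radius > 0:
        region = img.crop(sensitive_box).filter(ImageFilter.GaussianBlur(radius))
        img.paste(region, sensitive_box)
    return img

# Example: render a photo shared on a given date, with one region fading out over time.
view = decayed_view("shared_photo.jpg", (120, 80, 260, 220), datetime(2022, 11, 1))
view.save("decayed_view.jpg")
```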

Debra Farber  25:40 
And so, would an example of the trusted hardware be, in this Snapchat example, where I could take a screenshot, and the software on the phone that allows me to take a screenshot of a picture that's supposed to be ephemeral means it will persist in my photos...is the concept here that if my phone had trusted hardware and the app ran on it, there might be a way, when I'm using Snapchat, for it to disable, for instance, the ability to take a screenshot?

Apu Kapadia  26:12 
Yes, exactly. That would be one possibility. Yeah, one way would be to implement the application so that when it's running under 'trusted mode,' it is going to do what it's supposed to do and you can't stop it from doing that.

Debra Farber  26:25 
What would that require from...you know, to bring something like that to market, I mean, it sounds like you'd need to get the hardware providers, whether it be phone or computer, you know, whatever device, your glasses, you know, AR glasses, you would need the hardware manufacturers to really build this in with privacy-by-design to enable those capabilities, right?

Apu Kapadia  26:46 
Yes, and trusted hardware is showing up in our various devices. I'll just give you one example. You know, Intel has something called SGX, and that is starting to be accepted, or show up, on different platforms, and there's research around how you can make use of SGX to provide such guarantees. So, various mobile devices, and laptop- and desktop-class devices, are now starting to see different types of trusted hardware being deployed. You know, this started experimentally many years ago. I can't pinpoint the exact year, but it was the mid-2000s, I believe, when the TPM chip - it was called the Trusted Platform Module - started being deployed with certain laptops. So this has been around; it's not just a theoretical construct. It's definitely technology that industry and researchers are experimenting with.

Debra Farber  27:52 
Excellent. Well, I'm definitely gonna, you know, keep track of that, and maybe even have more discussions on the show around trusted hardware. We need to get people together to realize that when you're bringing new products to market, you need stakeholders from the entire ecosystem. So for instance, I do work with the XR Safety Initiative; and, in order to move things forward in XR (extended reality), you really need all the providers to kind of come together on this: the hardware providers, the software providers, the app developers - everyone needs to come together to understand what the requirements are, because you need capabilities actually built into the hardware to enable the coding of your software. So it's pretty essential to nail down what that looks like, and how you gain that trust at the hardware level so that you can open up the ability to put in these privacy and security guarantees.

Debra Farber  28:47 
Okay, so question: what guidance do you have, then, for developers building privacy-respecting social media apps?

Apu Kapadia  28:55 
Yeah, so some of this advice may be standard, and there's a lot of advice out there. But, just to highlight some key points, the developers should follow a privacy by design approach where you consider privacy from the start. You don't just slap it on later. And there's a whole privacy by design framework. So there's a lot of guidance out there. Developers should be very transparent with users about the data that they collect; and they should, in the first place, minimize the collection of user data. They should also refrain from secondary uses of that data, which users may not have expected. And whether or not it was in the fine print somewhere, you should really be refraining from secondary uses that people are not expecting.

Apu Kapadia  29:40 
Beyond some of the standard approaches for development, I feel like there's one area where we need a lot more work and innovation in social media applications, which is: how do we deal with privacy violations between users once these violations have already occurred? Removing the offending content - photos that other people don't appreciate, text-based posts that people don't appreciate - is a complex issue that involves social relationships in addition to technical mechanisms. And we've been doing some research on that, to understand how people are already dealing with this situation and what could be done. And we have some ideas - like, for example, providing channels for anonymous feedback from users to the person posting this content, because they don't want to damage their social relationships, or helping them craft diplomatic responses. It's as much of a people problem as it is a technical problem. And so, I would really like to see some more innovation in this area from the developers of social media applications.

Debra Farber  31:02 
It's definitely a hard problem to solve for. I would love to see more heads put together on that topic as well. Are you working on any other privacy-related projects that you could tell us about?

Apu Kapadia  31:15 
Yeah, so I'm currently involved in a large effort with eight faculty members across four different universities through a grant funded by the National Science Foundation. The topic is improving cybersecurity and privacy for marginalized and vulnerable populations. For the last several years, we've been studying how cameras can be used to improve the privacy of people with visual impairments. I talked about all this work we had analyzing photos for privacy leaks; but in the end, you're analyzing imagery - it doesn't have to be a photo. So you could have a camera feed from an assistive device that a visually impaired person is wearing, and then you could analyze the environment for security, safety, and privacy issues.

Debra Farber  32:07 
You're analyzing the physical safety and physical privacy?

Apu Kapadia  32:11 
Yes.

Debra Farber  32:12 
Okay.

Apu Kapadia  32:13 
So you know, you're at an ATM. You're withdrawing money. Is somebody behind you watching you type in your PIN code? That's an example of a commonly mentioned issue. Or if you're on your computer, looking at your bank account, is somebody watching your screen? A person with a visual impairment is not necessarily totally blind; they might actually have to enlarge the text on their screen, making them more vulnerable to these so-called 'shoulder surfing' attacks, where somebody is looking over your shoulder and watching your screen.

Debra Farber  32:54 
Right. So many people I've seen over the years have used, especially in a corporate context, a privacy screen that you just kind of add on to your device, so that if you're not looking at the device head on - if it's just a little askew - it kind of obscures your screen and no one can, you know...it's supposed to prevent shoulder surfing. I just don't know about compliance with that - whether people tend to use them or keep them on - because sometimes it can actually be a distraction when you're trying to get someone to actually look at your screen with you. Right?

Apu Kapadia  33:25 
Yeah.

Debra Farber  33:26 
What are some other examples of how - besides, you know, a physical thing you just put on the screen - these cameras alert a visually-impaired person?

Apu Kapadia  33:37 
Yeah, great question. So we've also started looking at ways to provide feedback to the visually-impaired person through the audio channel. And then, there are interesting questions about...well, what are you supposed to be saying through the audio channel, and how does the algorithm really know, in this context, what is okay and what is not okay? What is it about the environment that we need to summarize, distill down, and give as quick, short feedback to the person wearing this assistive device? And so we are looking at ways to modulate the voice. You don't have to say too much, but based on the intonation, you can convey different kinds of information: the severity of the threat, or the confidence the algorithm has that somebody is actually looking over your shoulder or not. Right? It may just have medium confidence, in which case it could sound a little tentative and intonate the feedback in a way that's tentative. So, it's such a wide-open space with lots of opportunities to try to understand the best way to provide this information to the person with a visual impairment.

Debra Farber  34:57 
Well, I think it's such a noble cause and too often organizations forget about threat modeling for vulnerable populations. So, I'm really grateful that you're working along with the NSF and these other professors on such a noble cause.

Apu Kapadia  35:13 
Yeah, and then I'll give you another example. In my own research, we are starting to explore the privacy of children in the context of what's called 'sharenting.' Sharenting refers to parents sharing photos of their children on social media; and people have lots of, you know, opinions about this. But we're trying to understand the implications of sharenting for the children, the motivations behind it, and whether we can devise safer ways to share such photos. So there's a lot of work to be done, but those are just a couple of examples of the work I've been doing with my group on marginalized and vulnerable populations.

Debra Farber  35:59 
Thank you so much for that. Yeah, I definitely want to hear how that work evolves. Okay, we're nearing the end, but I do want to ask you what privacy enhancing technologies you're most excited about?

Apu Kapadia  36:11 
Oh, good question. So, some 15 years later, I still continue to be excited about anonymizing networks, such as Tor. These are services where you can bounce your connection around various other computers on the Internet, so that when you access a website, the website doesn't know where you are, because your connection has gone through, let's say, three other computers in the middle, plus several more computers and routers on the Internet routing between those Tor nodes. That concept of anonymizing networks, where you bounce your connection around, first inspired me to become a researcher in security and privacy. And it continues to fascinate me today because the service keeps improving; researchers have been working on it, and the results of the research are quickly integrated into the product - the Tor service. So that's really exciting.

Apu Kapadia  37:16 
Maybe many of you have heard of Signal, which is this messaging application that has encrypted communication. That's exciting to me, because it's one of the few apps where you have a stronger assurance that your conversations are actually private. Right? You have to take some additional steps to make sure that's the case, but it's exciting to have this kind of application that is primarily driven by privacy - like they're selling privacy. Right? Well, they're not selling it; I think it's free. But my point is that the product is centered around privacy.

Debra Farber  37:58 
Yeah, you know, I feel like it is and it isn't. Right? It's centered around confidentiality and enforcing that, which is a security aim, but it also enforces some privacy goals there. I think of anonymization as free speech and anti-tracking. So in this way, you're saying, "Oh, I can't be tracked," and yes, I feel like that's a privacy-related thing. But, I don't think people tend to use Signal or Tor because they don't want to be tracked from an advertising perspective. It's more that they want to be able to do whatever it is they're attempting to do without blowback, or without the government watching and having a problem with it, or to preserve confidentiality, which is again a security aim. So, I think when people select a state of 'anonymity,' it's really more about free speech: I want to be able to say and do whatever I want without blowback to my identity. I'm curious though, I really am: what is so exciting about anonymizing networks to you? I probably should have asked that before I gave my, like, colorful opinion.

Apu Kapadia  39:09 
This is just...because it's a personal question, I myself am wondering why. Why am I so excited by it? I just remember as a child watching movies where, you know, a telephone call is being routed around the world and then it's hard to trace back. Right? So this idea of adding privacy by jumping around and being hard to trace back always fascinated me. But later on, what really fascinated me about these types of systems is that they're very interesting to model and understand mathematically, and there's a lot of technical work that goes into measuring the amount of privacy they provide - how much privacy is lost in certain situations. So there's been a lot of work quantifying, from an information-theoretic perspective, how much privacy was actually lost in a particular attack by taking particular measurements. So later on, as a researcher, it just became very exciting to look at the problem more formally and mathematically, seeing the various guarantees that you could make and really understanding how much privacy you are getting, or not getting. And what continues to keep me engaged now is that there's a whole army of researchers - academia, industry, government - trying to find problems with Tor, fixing those problems, and then seeing those fixes get integrated. So, it remains exciting as a researcher as well.

Debra Farber  40:47 
That's a really comprehensive answer and it explains a lot. I always like to understand what gets people really excited, so I really appreciate that. You know, obviously, at first glance, I forgot that there's so much math and cryptography involved in it, so I get why that would be really inviting. I wasn't going to ask this, but I'm curious, given that anonymizing networks are so interesting to you: are you following the crypto space? And I don't want to open up too big of a can of worms - I just want to understand one or two points from you, if you are following that space, about the attempts, I guess, to anonymize financial transactions across those networks. Is that something you're following?

Apu Kapadia  41:27 
Yeah, I have to admit that I'm not. I've been staying away from cryptocurrencies, personally.

Debra Farber  41:35 
Yeah.

Apu Kapadia  41:35 
So I don't think I can say anything too sensible about it, other than I do like the idea of making sure that these transactions are anonymous. You know, Bitcoin might have become popular, but it's not really providing the types of privacy guarantees that other types of digital currencies offer in terms of anonymous transactions. So, I do believe that anonymous transactions are an important goal, and there's a lot of cool work to be done from a mathematical, cryptographic perspective. And I know lots of great people working on that problem.

Debra Farber  42:12 
Awesome. Yeah, I see that topic talked about a lot on social media, so I figured you might know others who are working in that space. Okay. And last question for you: how can we make privacy and security more usable?

Apu Kapadia  42:26 
Oh, yes, this is my area of research, and that's a great question. So...

Debra Farber  42:31 
Oh, I gave you that softball for last.

Apu Kapadia  42:35 
So, I think developers should really start by understanding the user through empirical research methods. By observing and studying real users, you might find that your assumptions are often wrong. Right? I gave the example of the privacy warning backfiring. But we know that users even today keep picking weak passwords, whereas the developer just wants them to pick a strong password. Other kinds of warnings backfire too: security warnings, browser warnings - people just say, "Okay, fine, I just want to go there anyway." And so to actually build systems to be secure, factoring in the user, is not easy. At the same time, you shouldn't blame the user, right? You have to consider them an important part of the system, not some entity that merely uses your system. You have to think of the user as part of your system - a first-class component. And in the end, you will then build systems that work for real people (your actual users) and not the ones that you imagine - typically, people imagine themselves.

Debra Farber  43:50 
Thank you so much for that, Apu. Thank you for joining us today on Shifting Privacy Left to discuss your privacy and security research.

Apu Kapadia  43:58 
Thank you for having me. This was wonderful.

Debra Farber  44:01 
Oh, I'm so glad you had a good time. Thanks for joining us today, everyone, until next Tuesday when we'll be back with engaging content and another great guest.

Debra Farber  44:12 
Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website: shiftingprivacyleft.com where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend; and if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to Privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.
