The Shifting Privacy Left Podcast

S3E10: 'How a Privacy Engineering Center of Excellence Shifts Privacy Left' with Aaron Weller (HP)

Debra J. Farber / Aaron Weller Season 3 Episode 10

In this episode, I sat down with Aaron Weller, the Leader of HP's Privacy Engineering Center of Excellence (CoE), which is focused on providing technical solutions for privacy engineering across HP's global operations. Throughout our conversation, we discuss: what motivated HP's leadership to stand up a CoE for Privacy Engineering; Aaron's approach to staffing the CoE; how a CoE can shift privacy left in a large, matrixed organization like HP's; and how to leverage the CoE to proactively manage privacy risk.

Aaron emphasizes the importance of understanding an organization's strategy when creating a CoE and shares his methods for gathering data to inform the center's roadmap and team building. He also highlights the great impact that a Center of Excellence can have and gives advice for implementing one in your organization. We touch on the main challenges in privacy engineering today and the value of designing user-friendly privacy experiences. In addition, Aaron provides his perspective on selecting the right combination of Privacy Enhancing Technologies (PETs) for anonymity, how to go about implementing PETs, and the role that AI governance plays in his work.

Topics Covered: 

  • Aaron’s deep privacy and consulting background and how he ended up leading HP's Privacy Engineering Center of Excellence 
  • The definition of a "Center of Excellence" (CoE) and how a Privacy Engineering CoE can drive value for an organization and shift privacy left
  • What motivates a company like HP to launch a CoE for Privacy Engineering and what its reporting line should be
  • Aaron's approach to creating a Privacy Engineering CoE roadmap; his strategy for staffing this CoE; and the skills & abilities that he sought
  • How HP's Privacy Engineering CoE works with the business to advise on, and select, the right PETs for each business use case
  • Why it's essential to know the privacy guarantees that your organization wants to assert before selecting the right PETs to get you there
  • Lessons Learned from setting up a Privacy Engineering CoE and how to get executive sponsorship
  • The amount of time that Privacy teams have had to work on AI issues over the past year, and advice on preventing burnout
  • Aaron's hypothesis about the value of getting an early handle on governance over the adoption of innovative technologies
  • The importance of being open to continuous learning in the field of privacy engineering 


Aaron Weller:

It's the ability to understand what the company is doing strategically so you can kind of get ahead of it, is really important. And, also being close enough to the engineering teams that you're not seen as being somebody on the outside who's imposing things, but really somebody who's helping to solve the problems that they are facing.

Debra J Farber:

Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans, and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding-edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to The Shifting Privacy Left Podcast. I'm your host and resident privacy guru, Debra J Farber. Today, I'm delighted to welcome my next guest, Aaron Weller. He is the leader of HP's Privacy Engineering Center of Excellence, where he provides technical leadership and solutions for privacy engineering, enablement, and experience for HP's global operations. Aaron has 20 years of global consulting experience in security and privacy, including the founding of his own consulting firm, Ethos Privacy. He's served in executive roles like CISO and CPO, and he also sits on IAPP's Privacy Engineering Advisory Board. I'm really excited to have Aaron on as a guest today, as I know he has so much wisdom to share with you all. Welcome, Aaron.

Aaron Weller:

Hey, Debra, glad to be here on this sunny Friday.

Debra J Farber:

Yeah, it's pretty sunny here, too. We're both in the Pacific Northwest. I'm here in Southern Washington and, yes, there is some sun; so, it feels like spring actually is finally here. The main topic of today is really to talk about the Privacy Engineering Center of Excellence. So, before we dive into that, you've got such deep consulting experience in both privacy and security. Maybe you could tell us a little bit about your background and how you ended up where you are at HP today running the Center of Excellence.

Aaron Weller:

Sure. I tend to think of privacy as my third career. I started off with audit and assurance, realized fairly early on that that was not really where my passion was, and transitioned into information security around, I think, 1999, which seems like a long time ago - and I guess it was. Back then, a lot of information security was manual - we were scripting a lot of things ourselves. I ran teams of ethical hackers, which was interesting, and I did some forensic work as well.

Aaron Weller:

I lived in London; I lived in Australia; and then, I moved out to Silicon Valley. When I was working in Silicon Valley in a CISO role (or an interim CISO role), one of the companies that I was working with had experienced a privacy incident, and I realized fairly quickly that I needed to know more about privacy. We were relying on outside counsel support. I realized I needed to know more about it myself and took my CIPP, I think in 2007 / 2008; and, I've really transitioned into privacy full time since then. I ended up at HP because HP was actually a client of mine when I was running Ethos; and, when I ended up selling the company, I reached out to a few people, including my friend Zoe - who's the former Chief Privacy Officer at HP - and said, "Hey, I'm looking for something interesting to do next." And that's kind of how I ended up here. The role that they'd been looking for someone to fill, and what I was looking to do next, matched up really well; and, I joined HP in December '22.

Debra J Farber:

That's awesome. Coincidentally, a lot of those roles are actually in Vancouver, Washington, which is where I live, so I just love it. Every time I pass by the HP Center, I always think of you. What exactly is a Center of Excellence, and how does a Center of Excellence in privacy engineering drive value for an organization and enable it to shift privacy left?

Aaron Weller:

So, the Center of Excellence idea is certainly not unique to privacy. It's been around for a few years. One of the earliest examples - I'm not sure if it was the first one - was actually at NASA, and what they were looking to do was to have a team of people who were abstracted from the day-to-day and could really think about building for the future. HP has a number of different Centers of Excellence in different domains. What my team does is really look at what are the controls, the techniques, the guidance, the training - the things we can put in place that will raise the bar across the whole of the organization. And, we do have Privacy Engineers that work specifically within certain teams, but they are working on those day-to-day privacy engineering things.

Aaron Weller:

I think a lot of the folks who are in a career in privacy engineering, they're working in that stuff. Right? They're solving real problems on a day-to-day basis; whereas, I have (and I do think of it as a luxury) a team that is a little bit removed from that. We can then say, "Where do we want privacy engineering to be at HP in a year, in two years, in three years? And then, how do we build those capabilities to get us there in a way that enables everybody else to come along as well?" So, half of the people on my team are PhDs who've got really deep backgrounds in privacy, in technology, in cryptography, in interface design - a lot of these areas where it's hard for any individual team to have someone with that expertise unless they are centrally located and can then serve a number of different groups across the organization.

Debra J Farber:

Well, that makes sense. So, it's definitely a place where you could have a lot of key experts who can act almost like consultants to other business lines.

Aaron Weller:

Exactly. That's a good way of looking at it. Yeah, we definitely get brought these interesting problems from across the business and then look for a way to solve not just that problem but really the class of problems that it belongs to.

Debra J Farber:

That's fascinating. So, what motivates leaders generally to create a Privacy Engineering Center of Excellence? If you feel like you can, it'd be great to just frame it in terms of what motivated HP to create such an organization.

Aaron Weller:

Yeah, so the discussions around creating it. . . I was the first person into the team, so the discussions were happening before I was hired; but, some of the drivers were really going back to what I was saying: how do they get ahead of some of the emerging challenges in the field of privacy engineering and not have people who are just going to get dragged into the operational day-to-day pieces? So, I think that was part of the motivation. The Privacy team had been around for quite a while, with good capabilities in many areas of privacy, but privacy engineering was not being run at what we call the "pan-HP" level - kind of that global level. So, there was a gap there that was identified, and the team, and the role that I took on, was designed to help address that gap.

Debra J Farber:

That makes a lot of sense. So then, what kind of reporting line works well for a Privacy Engineering Center of Excellence? Is it under a Chief Privacy Officer's purview? Is it Head of Engineering? Something else?

Aaron Weller:

You know, there are good arguments - and I've had this discussion with a few of my peers - in different directions. The way I look at it: the ability to understand what the company is doing strategically, so you can kind of get ahead of it, is really important. And, also being close enough to the engineering teams that you're not seen as being somebody on the outside who's imposing things, but really somebody who's helping to solve the problems that they are facing. There could be seen to be tension there, depending on kind of. . .

Aaron Weller:

There's a lot of research that shows that where a team actually reports into gives a lot of flavor to how that team will operate. But, I do think that there are. . . I mean, I have close relationships with the Chief Privacy Officer; the VP who runs data governance in the Chief Data Officer's team, who I work very closely with; the folks who run our AI pipelines; and then, the Engineering teams and the broader Privacy teams across the rest of the organization. So, I'm very much a matrixed person on a matrixed team, because I think you need to have all of those relationships. The reporting line is important to an extent, but I think it's more important to have this broad network and just be tied into what's going on, so that when I get these questions, I've already been thinking about a lot of these things rather than having to do things off the cuff.

Debra J Farber:

That makes a lot of sense. I really like the tie-in with the strategy of the organization - corporate-level strategy - so that you're aligned. So much of the time, privacy is looked at as a compliance exercise and the rest of the business isn't even paying attention. It's more of, "Just go fix the problem, fix the bug - the privacy bug," right, without really understanding how wicked a problem privacy can be. So, that's pretty cool that, it sounds to me, a key component of the Center of Excellence is really aligning with the corporate strategy. [Aaron: Yeah, yeah, for sure.] Okay, so you were the first employee on the Privacy Engineering Center of Excellence team. How did you go about creating a roadmap for the Center of Excellence, and then your strategy around building out your team?

Aaron Weller:

So, the first thing I did was go on a listening tour, and I talked to probably 40 or 50 people across the organization: executives, people who are working with privacy, people in teams like the marketing team who have a great reliance on being able to use data for those purposes. I went through and asked them a set of questions: "How do you interact with privacy today? What are some of the main challenges you're facing? What do you think that me and my team would be able to do to support you and help you achieve your goals?" That alignment again. What came out of that exercise was, "Here's a list of issues; these are the ones that seem to keep coming up over and over again that I think my team could address." From that, I built out a roadmap and also a hiring plan, to say, "Based on what we're seeing,

Aaron Weller:

these are the kinds of things that I think are important." It's interesting, because when I was hired, when the job description was first put out, there was a lot of focus on engineering specifically; and, what I realized through that listening tour was that there was also a need for more education and enablement of these teams. Like, how does privacy engineering help? How can we actually transform some of these data sets in a way that gets more value out of the data we've already collected? Which, I think, is very different from that compliance lens you were just talking about. Then, one of the other areas that I saw was really in what I'm looking at as 'Privacy Experience Design.' How do we design great experiences that are good for users - you know, that are transparent and all those other things - and that also tell part of our privacy story? So, as I had those conversations, I said, "Look, I think we need to broaden the mandate of what this CoE should be doing, because without some of these other elements, I worry that we'll be building a lot of good backend processes, but we'll never really have that connection back to 'How do people even know that these things exist, and will it actually move the needle from an external perspective as well?'"

Debra J Farber:

That makes a lot of sense. So then, what kinds of skills and abilities were you looking for when building out your team? I imagine it will be different for every organization, but I would think that for a Privacy Engineering Center of Excellence, there might be some guidance - based on what you're doing, or how you staffed your team - from which others could learn a lot and deploy something similar.

Aaron Weller:

Sure, yeah. I mean, my first hire was actually a Privacy Architect. I wanted somebody who could really help to address some of these strategic problems, and then I built in engineers under that pillar of the CoE to help work on actually solving those problems directly. So, I built out that pillar with an Architect and then hired a Privacy Engineer into that role. And, I think you and I have chatted previously about how I am not a fan of the phrase "Privacy Engineer" when it comes to hiring, because it means so many different things to different people. What I really wanted, and what I ended up hiring, was a PETs engineer - somebody who can really go and build privacy enhancing technologies.

Aaron Weller:

It took me a while to find somebody. We made a great hire, and we've really focused on the PETs side of what we need to do. I also hired somebody to help run the enablement side of things, too - looking at how we really get the right resources into the hands of thousands of engineers across the whole company, from both a resourcing perspective and a communication-channels perspective: building and engaging with an audience across the organization. We don't quite run podcasts internally, but what are those various channels? Where do people congregate to ask questions, and where may they be asking privacy questions that we just don't have visibility into unless we're in those communities working alongside them?

Debra J Farber:

That makes a lot of sense, and if you do ever decide to run a podcast internally, please do reach out. So, staying on the topic of Privacy Enhancing Technologies, it sounds like you definitely have a person that helps create them and implement them. I was going to ask, does your team determine when and under what conditions different business lines can use a particular PET, or is it more like it's guidance created for just educating on the benefits of PETs and when you might want to use them; and then, the business line makes the determination?

Aaron Weller:

We have broken it down into what I'm calling "type one" and "type two" problems. Type one problems are where we actually want to achieve anonymity against a legal standard - particularly if we're looking at a legal standard in Europe around GDPR, or other emerging standards, around "this data set is anonymous; therefore, it's no longer personal data." So, that's a type one problem, where we have to work closely with legal folks and work out what we can do to this data set to achieve a level where we can say confidently that this data set is anonymous. A type two problem is more around a spectrum of de-identification: we can reduce risk in this data set, but we're not going to go as far as actually being able to say that it's truly anonymous. So, it could be using something like pseudonymization techniques, maybe aggregation - although, if you aggregate enough, we could consider it anonymous as well. So, we're looking at building out, effectively, a decision tree of the kinds of business problems that we're looking at. Like, "We want to use this data and we don't have consent for it." Okay, so in that case we'd need to anonymize it before you could use it. Or, "We're looking at this data and we don't think we need the personal information, because that's not really important to the outcome" - so, looking at some of those redaction techniques. Or, "We really need to manipulate the personal information, but we don't then want it to appear in the output."

Aaron Weller:

So, there are a lot of different things that we're looking at, which are: "What are those business problems?

Aaron Weller:

How do we break them down into classes?"

Aaron Weller:

And then, "How do we look at all of the PETs that are available?" - and 'available' may mean that it exists in a research paper, or it may mean that there are commercial off-the-shelf solutions for it. Then, how do we work out which ones will really get the most ROI for the organization?

Aaron Weller:

Either they're broadly applicable, or there's a specific use case where being able to manipulate this data using a PET is going to open up and unlock things that previously weren't able to be done with that data. That's the way we look at it. It's interesting, because it means that we're always looking at it through that lens of, "Okay, what's the latest problem, or latest solution, or, I guess, new initiative that the organization is looking to do, and how can we enhance that using a PET?" - as opposed to just looking at it from that compliance side. Right? Because we couldn't possibly anonymize all the data in the organization; that doesn't make any sense. So, it's really trying to match up those use cases with what the problem is, or what the benefit could be that we would achieve from implementing that PET or a combination of PETs - because we've found that often the combination is the one that actually gets the job done.
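
To make the "type one" / "type two" framing concrete, here is a minimal sketch of the kind of decision tree Aaron describes. Everything in it - the field names, the use-case attributes, and the recommended technique labels - is invented for illustration; this is not HP's actual intake logic.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Hypothetical intake attributes; a real process would capture far more context.
    must_meet_legal_anonymity: bool   # "type one": a legal anonymity standard (e.g., GDPR) applies
    has_consent: bool                 # do we have consent to use the data as-is?
    output_needs_personal_data: bool  # does the outcome actually depend on the personal fields?

def recommend_approach(uc: UseCase) -> str:
    """Toy decision tree mapping a business use case to a class of techniques."""
    if uc.must_meet_legal_anonymity or not uc.has_consent:
        # Type one: work with legal toward a defensible "anonymous" standard,
        # often via a combination of PETs (generalization, aggregation, noise).
        return "anonymize: combination of PETs, validated against the legal standard"
    if not uc.output_needs_personal_data:
        # Personal fields don't matter to the outcome: strip or redact them.
        return "redact: strip personal information before processing"
    # Type two: reduce risk along the de-identification spectrum.
    return "de-identify: e.g., pseudonymization and/or aggregation"

# Example: no consent for the secondary use, so the tree routes to anonymization.
print(recommend_approach(UseCase(False, False, True)))
```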

Debra J Farber:

Yeah, that makes a lot of sense. You know, I like what you said there, too, because it follows the mantra I've always been saying around PETs: you really want to work backwards from the specific outcome that you want to guarantee. So, what are the privacy guarantees that you want to make? Whether it's flowing from the privacy notice - which states that you're guaranteeing a certain level of privacy - or, if you say you anonymize things, you want to actually be able to ensure that, or whatever legal certification you're aiming for, you can then work backwards to the privacy enhancing technology that can get you there. As opposed to going, "Oh, I really like this particular PET; let's deploy it here," when maybe it doesn't make sense for the privacy guarantee that it produces or does not produce.

Aaron Weller:

Yeah, and I love that you brought up privacy guarantees, because that's something else that we've been noodling on as well. I see a spectrum from what I would call "hard guarantees," which are really mathematical proofs. Right? Like, differential privacy can give you a mathematical guarantee to a certain level, depending on the values you choose for epsilon and the rest of the implementation details.

Aaron Weller:

But, you've also got what I think of as "softer guarantees" - things like "I will not sell your data" - where you've then got to do a lot of work to go from "Okay, what does that really mean? How can I prove it?" to the steps along the way to be able to say, if we made this statement, that we can actually provide assurance that it's true. So yeah, we're working on some of that, where a lot of PETs don't really come with built-in privacy guarantees. That's some of the work my cryptographers are doing: how do we almost advance the state-of-the-art here and say, "What are the privacy guarantees we can provide from something like multi-party computation, and can we do it in a way that would be at least broadly comparable to other techniques?" Because without that comparability, it's really hard to know what you're really getting from this technique from a risk-reduction perspective.
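
As a concrete example of the "hard guarantee" end of that spectrum: under differential privacy, the Laplace mechanism adds noise scaled to sensitivity divided by epsilon, so the epsilon value directly quantifies the guarantee. Below is a minimal sketch with made-up values - generic differential privacy, not anything specific to HP's implementation:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic with an epsilon-differential-privacy guarantee.

    Noise is drawn from Laplace(0, sensitivity / epsilon): a smaller epsilon
    means more noise and a stronger ("harder") mathematical guarantee.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a count query has sensitivity 1 (one person changes the count by at most 1).
true_count = 1342  # made-up number for illustration
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: released count ~ {laplace_mechanism(true_count, 1.0, eps):.1f}")
```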

Debra J Farber:

Yeah, that makes a lot of sense. One of the things that privacy folks have not had for so many years is KPIs and metrics that really move the needle. We used to have breaches and the number of them and all of that, but that's kind of moved into the security world. So, here, by sharing what your team is working on - and if we can get more collaboration on that, get it to a community level, maybe standardize it - I think that would go a long way for organizations that haven't done this work but want to benefit from all the knowledge out there about how to go about deploying PETs.

Aaron Weller:

I do see that as part of our mandate, too - to your point about being able to share some of this knowledge as well.

Aaron Weller:

I'm looking for those opportunities where we can help to share some of the work that we've done and say that this is something that I think others could use as well and could benefit from.

Aaron Weller:

Nobody is an island. We certainly look to others and are inspired by what other people are doing. I heard someone say something the other day that really stuck with me: "The way to innovate fast is to make the borders of your organization porous." You're letting ideas in, but you're also, importantly, letting ideas out as well, so that you can really be seen as somebody that others want to collaborate with, rather than just taking everybody else's ideas. So, I do think that's a really important part, and that's one of the reasons - you mentioned earlier that I'm on the IAPP Privacy Engineering Advisory Board for the next couple of years - that one of the things I really want to do is to help influence how we use some of the resources that the IAPP and the involved organizations have available to advance the state-of-the-art in a way that everyone can benefit - including those who maybe don't have the resources to set up a team similar to mine.

Debra J Farber:

Yeah, that would be great, especially for other privacy leaders. IAPP is a great org. I do wonder what level of influence they have in the engineering space, but I applaud the effort, and I think you've got a great Advisory Board - a great panel of other awesome privacy engineers to work with. So, I look forward to seeing the output of you all putting your heads together. What advice would you have for leaders in other organizations about setting up a similar style of Center of Excellence for Privacy Engineering? Basically, what are some of your lessons learned?

Aaron Weller:

Oh, that's a good question. Like I mentioned, HP has a number of different Centers of Excellence, and I think that's part of the culture - being able to have people who are focused on driving the state-of-the-art. I've joked with my team before that we are not a Center of Mediocrity; we are a Center of Excellence. So, there are certain expectations that I have: we should be doing stuff that's at least aspirational in some ways, but that we can also get done. So, I think making sure that your organization has a culture that's going to accept that Center of Excellence idea is important.

Aaron Weller:

I found that the listening tour I mentioned earlier, which I did when I started, was critical to starting to build those relationships across the organization - and also to not coming in as the outsider. I was a consultant for 20-plus years earlier in my career, and I think, probably, a few people, when they looked at me, were like, "You're going to come in, not understand what we need, and then just tell us what to do." It helped to really shape that narrative by saying, "I'm just here to listen and to really understand what the problems are, and then come back to you with the ones that I think I can reasonably address and the ones that I think would be best for the organization - even if they're not best for you as an individual stakeholder." Hopefully, we can align on those being the best problems for my team to be addressing. So, I think that was helpful.

Aaron Weller:

And then, really, it's being very picky with the hiring as well. Like I mentioned, it's really hard to hire a Privacy Engineer because that definition is so broad and the resumes were so varied. Really knowing what I wanted, and being able to say, "This is a very particular set of skills that I'm looking for," mattered. In that case, I think it took me six months to make that hire; I was fighting the tension of "there's work that needs to be done now" while really holding on until I had the right person in the seat. That's challenging when you're trying to balance those two objectives.

Aaron Weller:

But, I think, really, if you're looking for a team that's going to be at the cutting-edge of helping to drive the organizational direction in this space, it takes a certain mindset and an ability to be comfortable with ambiguity - some of these situations where we are literally building the plane as we're flying it. Not everybody wants to do that. So, I think that was a key part of the interview process as well. You've really got to push home the idea that this is going to be exciting, but sometimes it'll be Monday and your entire week gets turned upside down. If that's going to upset you, this is probably not the job for you.

Debra J Farber:

Yeah, that's fascinating and that's all with a company that has definitely invested in the concept of a Center of Excellence. It's already in its DNA, in the culture. What about for companies that. . . or maybe a Business Leader or a Privacy Engineering Leader that wants to ask leadership whether they would invest in a Center of Excellence? I mean, I know you didn't necessarily have to do that. What advice would you have to them to make the business case?

Aaron Weller:

I think it's really a couple of things that the CoE can deliver

Aaron Weller:

that would be very hard for somebody in more of an operational privacy engineering role to do - really having the time to invest.

Aaron Weller:

AI is a great example this year, where we have built a privacy/AI triage process and separate assessment processes, because we have the bandwidth - I could prioritize people and say, "We didn't plan to get this work done this year, but it's clearly a business priority, and we can go and focus on it because we're that little bit removed from some of the day-to-day operational stuff." I think that's some of the value in having. . . it's almost like. . . I was watching Braveheart the other day, where they keep part of the army in reserve, because when things change on the battlefield, it's great to be able to say, "Okay, now that's changed, we can go and adapt, because we didn't deploy everyone all at once."

Aaron Weller:

That's kind of how I feel the CoE can really be useful - being able to say, "We don't have to either go hire a consultant and train them on the business, or spend three to six months hiring somebody." We already have people here who know the organization, know some of the problems, and know the way that it works; and we can apply them to this problem. That's not to imply my team is sitting around waiting for new problems. Right? We have a huge backlog of stuff to do; but, it does mean that we can prioritize when things come up.

Debra J Farber:

It makes a lot of sense. I really like that analogy. All right, let's talk a little bit about AI governance. I know you brought it up; you said your team is in charge of at least looking at some AI pipelines from a risk perspective and maybe recommending some controls there. There's been a lot of hype around AI, right? Especially in the last year, with LLMs coming to market all over the place; and many privacy teams have been asked to advise on AI governance risks as well, which was not necessarily part of those teams' privacy mandate and thus might be eating into the amount of time spent on privacy engineering and technology. So, roughly what percentage of time - just an estimate - might your team have spent on activities outside of privacy engineering, like AI governance? And then, how can leaders help prevent burnout on their teams when adding these additional projects and requirements?

Aaron Weller:

Yeah, I mean, when you talk about time spent on activities outside of privacy engineering - what I've really looked to do, and I think we've been fairly successful with this, is ask: what are the things that we can do that apply to AI

Aaron Weller:

that we can also then turn around and apply to other use cases as well? A good example is some of the work we're doing at the moment around synthetic data generation and use, which is great for testing AI models, but which we can also use for other parts of the organization. So, it's trying to find the places where we're not doing something that's AI-specific unless we have to, and really looking at how we integrate that with the rest of the existing roadmap. Where would we apply PETs to AI? Where would we apply some of these other techniques to AI as well? But, yeah, we have had to spend some time building AI-specific triage processes for privacy, and some of those things we built from scratch. Now, we have them up and running and they're becoming more efficient, so we can really focus back onto some of the other challenges, too.
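
For readers unfamiliar with the technique Aaron mentions: synthetic data generation produces records that mimic the shape and statistics of real data without containing any real person's values, which makes it useful for testing models and pipelines. Here is a toy sketch with invented field names and distributions - a generic illustration, not HP's pipeline:

```python
import random
import string

def synthetic_customer(rng: random.Random) -> dict:
    """Generate one fake customer record that preserves only the *shape* of
    the schema (field names, types, plausible value ranges), not real data.
    All fields and ranges here are hypothetical."""
    return {
        "customer_id": "".join(rng.choices(string.ascii_uppercase + string.digits, k=8)),
        "age": rng.randint(18, 90),
        "country": rng.choice(["US", "DE", "JP", "BR", "IN"]),
        "monthly_spend": round(rng.lognormvariate(3.0, 0.8), 2),  # skewed, like real spend data
    }

rng = random.Random(42)  # seeded so test fixtures are reproducible
test_batch = [synthetic_customer(rng) for _ in range(5)]
for row in test_batch:
    print(row)
```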

Debra J Farber:

While some of the great aspects of AI will continue to move forward, hopefully some of the hype that was a little unwarranted will continue to die down.

Aaron Weller:

I mean, I've seen a lot of really intriguing AI use cases - some things where you look at it and you say, "That is demonstrably better than the way we were doing things previously." I've also seen some where I'm like, "You know, we could have done the same thing a different way." So, I think, at the moment, everybody's all-in on AI; and, I remember a few years ago what I called "peak blockchain," where I was literally sitting on a beach in Sayulita, Mexico, and a woman at the next table was trying to sell her blockchain idea to a venture capitalist. And, I'm like, "This is peak blockchain" - right when it's so pervasive that you just cannot get away from it. I feel that a little bit about AI today.

Debra J Farber:

Yeah, that makes sense. I feel like it's definitely the new blockchain, or even metaverse - I feel like nobody's talking about that anymore, compared to AI. I also recently saw a LinkedIn post of yours where you were discussing the value of getting an early handle on governance over the adoption of innovative technologies. You said, "Think AI today, but before that: blockchain, cloud, mobile, et cetera," and you made a hypothesis about effective governance. Would you share that hypothesis with the audience?

Aaron Weller:

This actually came out of the Gartner Hype Cycle, if you're familiar with it, where you've got the "peak of inflated expectations" (as I think you're referring to), then the "trough of disillusionment," and then it kind of levels out to "it's just a technology we use." Think of the cloud now: it's just a technology we use. There are additional risks and controls, but we've got processes and standards and all the rest of it. I do remember, several years ago, people would ask, "Are we ever going to move to the cloud, or is it just too risky?" - and now you don't hear that question being asked. So, my hypothesis, kind of similar to the Hype Cycle, was: often - and I've seen this in many organizations over the years - when there's a new technology, there is this FOMO, right, the fear of missing out. So, people really try and lean in hard to these new technologies. And, that's not to say there is no value in blockchain; there are definitely great use cases, particularly around supply chain management and being able to validate that things are the way that you believe them to be. But, I think it overextended itself into use cases that really didn't line up with the core value of the technology. So, I think there is this tendency to over-index, and many governance teams, in my experience over the years, are maybe slow to react. Right? There's a lot of stuff going on, and AI in particular is moving really fast. How do you react quickly enough, and then avoid the over-correction where you're like, "This is just out of control"? And we saw this again. . .

Aaron Weller:

I'm using a cloud analogy for a couple of reasons: one, to show that this is not a new thing, but also because we've gotten through the cycle. When we were looking at firewalls back in the day, every different service would run over a different port, and we would set up the firewall rules based on where we knew the traffic was going. With the adoption of cloud, everything ran over port 80 or port 443. You suddenly lost that control you had before, and we needed a whole new range of solutions for understanding which cloud providers or SaaS solutions even existed in your environment, because they all ran over the same ports. So, we had to have a different technique. And I think then there was the tendency to say, "Well, no new cloud - we've got to impose these processes, we've got to make sure it's all. . ." - which was probably an overcorrection.

Aaron Weller:

My hypothesis is that with effective governance, if you can get in there quickly, you can reduce the level of risk that's accepted before companies really understand the new technology. You're accepting risk because you want to get it done, but at the same time you may not really understand, or be able to quantify, what that risk is. Then, when companies realize it, they sometimes panic and overcorrect: the compliance organization says, "We've accepted way too much risk; we need to restrict this and get it back down to an equilibrium." And then, eventually, you'll settle on, "This wasn't as bad as we thought it was. We understand the risks better now. We have more standards and frameworks and things we can rely on" - so you reach this equilibrium.

Aaron Weller:

I believe that if you have a good governance team - going back to my Braveheart analogy - if you've got people who can jump on these kinds of things quickly and not have to wait for a break in their day job to do it, you can both reduce that level of risk that's accepted and reduce the overcorrection that organizations tend to make when they realize they were slow to govern in the first place and have to work their way back to whatever the long-term usage of AI is going to be - or cloud, or any of these other things.

Debra J Farber:

I like your hypothesis, and I wish that the Gartners of the world focused more on privacy as a separate industry and not just a subset of security - which it never is. Privacy is not a subset of security, but that's how it's treated at Gartner and Forrester and all the analyst firms. Maybe they're listening, and maybe hearing a hypothesis like yours inspires them to expand into privacy more. We're getting close to the end of the conversation, and I'd like to ask you: what advice do you have for those listeners interested in getting into privacy engineering?

Aaron Weller:

I think there's a lot of. . . and I've said this throughout my career - so, two things. One is that the job that you do in 10 years' time may not even exist today. Privacy engineering as a discipline, a separate domain, didn't really exist outside of a few companies 10 years ago. With all of these different areas, people ask, "How do I get into this new area?" Well, the easiest way in is to have an adjacent skill set. I would much rather hire an engineer who is a good engineer and teach them privacy - somebody who's got the right attitude and some of the good background - than try and find a unicorn who has everything. So, I would say, if you're looking at getting into privacy engineering and you have a good engineering background, you can pick up enough privacy, I think, to be effective in those roles. But often that may be a horizontal transfer within an existing organization. One of the best security guys who ever worked for me came to me from the IT Help Desk. He knew everybody in the organization; knew how the processes worked; had an interest in security; and was receptive to being trained up in some of the details. So, getting into privacy engineering - it's not like you have to go to Carnegie Mellon or one of the other places where you can do a full-on course. I think it's really understanding:

Aaron Weller:

Why do you want to get into privacy engineering? What is the piece of it that really intrigues or interests you? How will it be the next step in your career, and what skills do you already have that let you make that move? Look at a bunch of the stuff with AI right now, where so many people are claiming to be experts, and you go and look and say, "Well, how can they claim to be an expert?" A lot of the people I know who've been very successful are lifelong learners, and if you get an opportunity to talk to them, they can talk intelligently about different kinds of privacy enhancing technologies or other things relevant to the role. So, absolutely, there's a lot people can do to know enough to have that foundation. But yeah, I've always been successful in recruiting from adjacent domains, where maybe there isn't the hype, and where there are people with that good baseline knowledge who can be really effective in a privacy engineering role - as long as they're open to that continued learning.

Debra J Farber:

I think that's great advice. Thanks for that. I mean, you just dropped so much wisdom. Do you have any other words of wisdom that you'd like to leave the audience with today, or any upcoming conversations that you want to plug, or any frameworks and working groups that you think people should know about? Let us know.

Aaron Weller:

Yeah, I mean, I think there is an overabundance, probably, of information out there about the world of privacy engineering, and I am still discovering new things all the time. The latest one, which I found a couple of weeks ago, is some work from OWASP - famous for their Top 10 security vulnerabilities - who now have a Top 10 of AI vulnerabilities and a Top 10 of machine learning vulnerabilities. They've broken down controls frameworks around the various stages of an AI ingestion pipeline. So, I read all of these things, and I think what I've been successful at doing is synthesizing a lot of them together to say, "I can take a little bit of X, a little bit of Y, and work out something that's going to be effective within the organization that I work in." So, I think my words of wisdom are: there is more information than you could possibly consume out on the internet. There are so many working groups doing good work in this space; and, we mentioned LinkedIn earlier - so many people share papers and things they're working on there.

Aaron Weller:

I had someone reach out to me this morning and say, "Hey, I just published this - what do you think about it?" Use those resources - to my point about the way you innovate faster being that porosity of the border. I've always believed that anything I do, somebody else could help me improve. If you have that approach and you say, "I want to read ISO, and I want to read NIST, and I want to read all of this other stuff," you may not necessarily agree with all of any one perspective; but, by reading all of it, you can then produce or synthesize something that's going to be exactly in line with what you want, without having to go and build it from scratch. So, my words of wisdom are: if you're building something from scratch, you are either right on the bleeding edge, or you are not looking hard enough for something that you can leverage.

Debra J Farber:

Those are excellent words of wisdom. Thank you for that, Aaron. Thank you so much for joining us today on The Shifting Privacy Left podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest or guests. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you've found this episode valuable, go ahead and share it with a friend. And, if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.
