The Shifting Privacy Left Podcast

S2E39: 'Contextual Responsive Intelligence & Data Minimization for AI Training & Testing' with Kevin Killens (AHvos)

Debra J. Farber / Kevin Killens Season 2 Episode 39

My guest this week is Kevin Killens, CEO of AHvos, a technology service that provides AI solutions for data-heavy businesses using a proprietary technology called Contextually Responsive Intelligence (CRI), which can act upon a business's private data and produce results without storing that data.

In this episode, we delve into this technology and learn more from Kevin about: his transition from serving in the Navy to founding an AI-focused company; AHvos’ architectural approach in support of data minimization and reduced attack surface; AHvos' CRI technology and its ability to provide accurate answers based on private data sets; and how AHvos’ Data Crucible product helps AI teams to identify and correct inaccurate dataset labels.  

Topics Covered:

  • Kevin’s origin story, from serving in the Navy to founding AHvos
  • How Kevin thinks about privacy and the architectural approach he took when building AHvos
  • The challenges of processing personal data, 'security for privacy,' and the applicability of the GDPR when using AHvos
  • Kevin explains the benefits of Contextually Responsive Intelligence (CRI): which abstracts out raw data to protect privacy; finds & creates relevant data in response to a query; and identifies & corrects inaccurate dataset labels
  • How human-created algorithms and oversight influence AI parameters and model bias; and, why transparency is so important
  • How customer data is ingested into models via AHvos
  • Why it is important to remove bias from Testing Data, not only Training Data; and, how AHvos ensures accuracy 
  • How AHvos' Data Crucible identifies & corrects inaccurate data set labels
  • Kevin's advice for privacy engineers as they tackle AI challenges in their own organizations
  • The impact of technical debt on companies and the importance of building slowly & correctly rather than racing to market with insecure and biased AI models
  • The importance of baking security and privacy into your minimum viable product (MVP), even for products that are still in 'beta' 




Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.

Kevin Killens:

I think it's very important that, as the world grows increasingly more connected, the connection is isolated to what we give access to, not what we put online. I think there's a big difference between what we actually agreed to give people access to and what they actually access. I'll give you a perfect example: if you look at the EULAs - the End User License Agreements - no one ever reads those. They're so long and obfuscated, and companies do that on purpose so that you don't read them.

Debra J Farber:

Hello, I am Debra J Farber. Welcome to The Shifting Privacy Left Podcast, where we talk about embedding privacy by design and default into the engineering function to prevent privacy harms to humans, and to prevent dystopia. Each week, we'll bring you unique discussions with global privacy technologists and innovators working at the bleeding edge of privacy research and emerging technologies, standards, business models, and ecosystems. Welcome everyone to Shifting Privacy Left. I'm your Host and resident privacy guru, Debra J Farber. Today, I'm delighted to welcome my next guest, Kevin Killens, CEO of AHvos.

Debra J Farber:

AHvos specializes in AI solutions for data-heavy businesses. In a strategic partnership with Trinsic Technologies, AHvos offers exclusive cloud hosting for its private AI engines, allowing businesses to have their own dedicated AI engines powered by their own data. According to Kevin, what sets AHvos apart is its proprietary technology called 'Contextually Responsive Intelligence,' or CRI. Unlike traditional AI models that rely on 'Large Language Models' (or LLMs) and heavy neural networks, AHvos' CRI technology can act upon a business's private data and produce results without storing that data. I'm really excited to dive deeper into this technology and learn more from Kevin. So, welcome Kevin.

Kevin Killens:

I appreciate it. Thank you very much, Debra. It's great to be here and look forward to our session today.

Debra J Farber:

Excellent. So, obviously, to start, I think it would be really helpful for you to tell us a little bit about AHvos, what motivated you to found this AI-focused company, and a little bit about what your career trajectory looks like. I know you were in the Army or Navy. You were in the military, right? [Kevin: Navy] Awesome. Tell us a little bit.

Kevin Killens:

I was in the Navy. This company is a very personal effort for both my partner and me, and we're dedicated to it for a variety of reasons, obviously, but we saw that there were a lot of companies out there that were performing AI and coming up with various models. Two things stuck out to us. One was that the change in AI was really not that significant since the middle of the last century; the main advancements that were made were hardware and the amount of digital data that was available. So, my partner and I said, "Okay, there's got to be a better way of doing this." So we started to look into it more deeply, and the main reason that we focused on it was that we both have family members that had medical issues. We wanted to find solutions for medical issues, and that's the reason why we started working with the models that we have with our engines. Because those medical issues were so personal to us, one of the key elements was that the medical issues our family members experienced were not very common ones. We all know pharmaceuticals focus on the dollars, and so smaller issues were not things that they would place a lot of heavy emphasis on. So, we decided that we would come up with a system where we could address both the large-scale medical issues and also the smaller, more niche or more rare conditions. That's what kind of prompted us to start it. As we started moving down the path, we found that there were a lot of commercial applications to what we do, and so we obviously started working on that path as well.

Kevin Killens:

Our ultimate goal is to be able to provide very inexpensive solutions for the medical industry, and it wasn't what I started out doing in my life.

Kevin Killens:

I started out in the military pretty much about a year out of high school and spent some interesting time in various parts of the world. I was actually someone who hunted submarines, so that was an interesting component. Flew in helicopters, hunted submarines, did a variety of other things, and I found myself in the middle of some times that I didn't expect, but I'm glad to have served. While it wasn't technical as we look at it today, hunting submarines in the military required a lot of technical understanding, and so that kind of started me off on the technical path, even though I took my first programming class in 1980, way before probably most of your listeners were born. That being said, after that I got out and I started going down the technical path, doing everything from consulting to various small businesses that I've owned, and worked in fintech for about 25 years. Since that timeframe, about seven years ago, my partner and I started working on AHvos. It's very exciting for us to be able to bring this to market, and we look forward to seeing where it's going to go.

Debra J Farber:

I appreciate that. That's super helpful. The reason that you're on the show is that you took a particular privacy-related approach. I would love to hear about some of the issues you saw around AI and privacy and what architectural approach you took for AHvos. I know you have several products. Whether it's at a higher-level philosophy or you want to go into your approach to each product, I'll leave it to you.

Kevin Killens:

We did take a privacy approach, and it's very important to us. I'm a bit of a privacy hack myself, because I don't believe that people have a right to our data, our online personage, if you will. I think it's very important that, as the world grows increasingly more connected, the connection is isolated to what we give access to, not what we put online. I think there's a big difference between what we actually agreed to give people access to and what they actually access. I'll give you a perfect example. If you look at the EULAs - the End User License Agreements - no one ever reads those. They're so long and obfuscated, and companies do that on purpose so that you don't read them. That's when they're able to adjust things.

Kevin Killens:

What we did at AHvos is we said, "Okay, we're not going to do that. We're actually going to establish our engines, our models, so that they don't take advantage of people's personal data, so that a company can be assured that their data is not going to be shared with other companies or made a part of our larger AI model." Our engines are designed to learn concepts from customer data. They're not designed to keep the customer data. When a customer actually injects data into our engine, what they're doing is giving our engine the capability, or the opportunity, to learn from concepts. Once we ingest that data, it's gone. We don't put it anywhere. We don't keep it. The reason for that is that it's not our data; it's the other company's data. I think, when it comes to architecting models, we need to understand that we would want our customers to have the same security, the same privacy, that we have and that we want. It's just like the Golden Rule: do unto others as you would have them do unto you.

Debra J Farber:

It's not "extract all the monetary value out of humans as much as possible and track them all across the world." It's not that.

Kevin Killens:

It's very much like The Matrix. They look at us as, instead of a battery, a data source. They try to extract every bit out of us that they can, anytime they have that opportunity. I find it interesting that many of the tech moguls don't allow their children to get online. [Debra: Agreed] But they have no problem doing that with our data. It's interesting that when you say something, all of a sudden you have ads pop up in your social media, et cetera, to find out what you want or to promote something. Well, we don't do that. You own the engine that we create for you. That engine is there to serve our customer. In doing so, we make sure that you have the ability to use it for what you want to use it for. No one else at our company can see any of the data once it's processed. It comes in, it gets processed. We can't do anything with it, but our engine is then educated to do what you want it to do in order to provide you the answers you want.

Debra J Farber:

Could you unpack that a little bit for us? It still has to be processing data. If you're ingesting it somehow, even though you're not storing it, it's still going to be subject to data processing regulations like the GDPR. Right?

Kevin Killens:

But here's the important part about it: it comes in, it gets processed. You could take our engine and look through that engine ad nauseam; you will never find anything related to it. You can't reverse engineer it. We learn the concepts. It's like if you were talking to me about algebra: you can teach me concepts about algebra without having to have specific problems.

Debra J Farber:

That makes a lot of sense. You're generalizing, almost - to make the context?

Kevin Killens:

For example, say I gave you a problem with two sets of parentheses, where you have A and B in the first and C and D in the last. Right? Part of algebra is called 'FOIL': First, Outer, Inner, Last. It's the way you multiply them, and you come up with your various result sets. Well, I can tell somebody FOIL, but they don't know that the data is A, B, C, or D. But if they see A, B, C, or D, they know how to act on it. Therefore, our engine understands how to get the result sets, but it doesn't keep the data.
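
[Editor's note: a minimal worked version of the FOIL pattern Kevin describes, using A, B, C, and D only as placeholder terms: (A + B)(C + D) = AC + AD + BC + BD, i.e., First (AC), Outer (AD), Inner (BC), Last (BD). The point is that the procedure can be taught and applied without retaining any particular values of A, B, C, or D.]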

Debra J Farber:

Do you create an ontology that overlays this, or is it the AI itself? Actually, we're going to get into that, because I'm going to ask you what is Contextually Responsive Intelligence? I guess I'm going to ask you prematurely. Since privacy is all about context, I definitely would love to unpack this technology that you've put in the market.

Kevin Killens:

Sure, absolutely. We have CRI, and it is Contextually Responsive Intelligence. The reason why we have that is because... look, I know all the rage is to have LLMs and to have generative AI. It's interesting, because generative AI can create a document, but that doesn't necessarily mean the document's correct. We've all heard the stories of people who used one of the various generative AI models for a professional reason and it wound up biting them.

Kevin Killens:

What we do is we say, "What would you like to know?" Then they tell us. Then we say, "Okay, great, send us the data that you have." Then we will extract from that data what you want to know, based on how it's labeled, based on a wide variety of things. What's exciting about it is that we give you the answers you're looking for rather than just giving you an answer. A lot of AI, especially generative AI, will give you an answer. It may not be right, but it'll give you an answer. The great part about what we have is that we can actually ingest the data. Data comes in. As it comes in, it does momentarily get placed in a file, but it is consumed as it is getting placed there. So, it's milliseconds that it's actually sitting there.

Debra J Farber:

I'll just point out to the audience that, while it doesn't exempt a company using your product from being subject to the GDPR if it's being used in the EU, it's still great because this is like security for privacy. You're reducing the attack surface; you're making it near impossible to gain access to that raw data. You're abstracting it out with this Contextually Responsive Intelligence. So, I just wanted to make that distinction. I wasn't trying to say it's bad that it captures the data for a little bit, just that because it is processing data, if there's personal data being processed, it's still going to be subject to the GDPR if it's in the EU, which again is not a problem in and of itself.

Kevin Killens:

No, it's actually not, and we welcome that. The reason why we welcome it is we think transparency is key. So, as we bring in data, what we do is learn the concepts from it. Let me give you an example. Let's say you send us medical data. Well, medical data is very - how do you put it? - it's very recognizable, to a large degree. They start using words like 'oncology,' they start using words like 'symptoms,' et cetera. There are large sets of words used in the medical community that let you identify that as medical data. We can learn, "Okay, this is medical data," without having to say, "Well, this patient had this condition," and so, therefore, we know how to deal with that data without having to deal with the specific patient and what data was represented in that file or in that data transmission. Does that make sense?

Debra J Farber:

It does. It does make sense. It's almost a little bit like data discovery platforms, how they try to find structured or unstructured data across an environment and find the personal data or find the medical data. But here, you've got a flexible platform - it sounds to me, at least - for a variety of use cases, clearly not only for data discovery; here it's specifically for AI rather than for a GRC-style, find-all-the-data governance purpose. It sounds like you're doing it for, "How can you find and make relevant that data in the moment, for what the person is searching for?"

Kevin Killens:

Absolutely. So, for example, you may have data where you're looking to identify a particular item. One of the things that we have, and we'll talk about this in a moment, is the ability to identify and correct inaccurate dataset labels. So, that's just a single use. Other things are, for example, when you take a telephone answering service: we can actually take data from them saying, "Here are our phone calls," and then what we can do is take that information, find out what they want to audit for, get examples of that, and then look at all of their phone calls and say, "You should audit this one, you should audit this one." We can actually go through their entire set of phone calls and be able to tell them which ones don't meet their standards. And right now that just isn't possible, especially in the timeframe that we can do it in.

Debra J Farber:

Fascinating. Okay, well, let's turn our attention to AI model bias. Typically, I stay on the topic of privacy as much as I can, but because there's so much going on in the AI space with the overlap, and because you have such an interesting solution that deals with it, I'd love to talk a little bit about bias. In your opinion, what's the leading cause, or causes, of AI model bias?

Kevin Killens:

One of the things that we address in our CRI is that we only use the customer's data. We don't have engines that have been trained on outside data, whether straight from the Internet or large, Project Gutenberg-only-type data sets. There's a wide variety of models out there that train their engines, or train their models, on publicly available data, scraping the web, and, as a result, it goes into their model. And that is the base that people use if they want to fine-tune that model, if they want to obtain answers, especially in the generative world. Generative AI takes large amounts of data to be able to create a language response; but then, whenever you ask for something or you try to fine-tune that, what happens is it actually dilutes whatever your data is with the data that is already in the model.

Kevin Killens:

So, when it comes to model bias, we've all heard about the various models out there that have a trillion parameters. Well, those parameters are actually set by machine, by algorithm; but those algorithms, those machines, are adjusted by humans. So, it can be biased based on what those parameters are. When you look at human-created algorithms that set the parameters, you can actually influence a generative AI, or any AI, by adjusting how those parameters are set. In addition to that, by putting in tremendous amounts of data prior to the customer using it, you can actually cause that model to be biased based on the data that you put in. So if, for example, you have two different types of cancer treatment and the generative AI, or the AI model, is completely or primarily trained on Type A treatment versus Type B, then that model will be biased toward what it's been trained on. So there's more to training AI and educating our engines than what people understand. You will only get a composite of what's been put in, and so-

Debra J Farber:

Garbage in, garbage out, as we hear. Yeah.

Kevin Killens:

Right! Garbage in, garbage out. Then, the other component to that is: where's the oversight? Who is watching how these models are set up? Who is monitoring whether or not they're being intentionally biased? We all know that there are ethics teams that have been laid off right after enormous rounds of investment were made into various generative AI platforms. Whenever you see someone disband an ethics team or an oversight team, the first question needs to be: "Why?" There are a variety of reasons, and they can be legitimate reasons to do that, as long as you replace it with another model to ensure that you're not providing biased answers. One of the things that I think is important is, like I say, transparency: to let people know what you're doing. We don't use data other than the data that the customer provides.

Debra J Farber:

And this is stored in... controlled, I should say, by your customer. Right? Like the company... or is this a managed service that AHvos would manage for them?

Kevin Killens:

The data we receive comes directly from the customer.

Debra J Farber:

My point is are you managing it on their behalf and helping with the models, or is it controllable by them, with them owning the access controls and all that?

Kevin Killens:

It is. We actually have our engines in a space on Trinsic. Those environments are controlled by us. What gets injected into those engines is controlled entirely by the customer, because they have the data and they have access to an API which then injects that data into our engine. So, we don't... it's not like we have their data sitting somewhere and they then tell us what to inject. What they do is send us that data, we put that in, and it only uses their data.

Kevin Killens:

Now, we work with customers. We have a business-to-business model right now. Ultimately, we will go to the consumer with an alternative to the current LLM generative AIs; but it, too, will not share their data. So, it's very important that people understand that. The customer can inject that information in; it gets digested; we don't keep the data. So, now they have this engine in there that they can actually use to get the answers that they want, but they don't have to worry about, "Well, what about my competitor? Can they access my model?" No, they can't. So it gives you the confidence that your data is secure. It's your own data. Your privacy is maintained for you.

Debra J Farber:

And that data is not going to go to train models elsewhere.

Kevin Killens:

Absolutely not, absolutely not. Each engine is its own entity. Now, we can have our engines work together, but if they do work together, they only work together with the customer's other engines (if they choose to have additional engines). So, we do not allow customer engines to work with outside engines.

Debra J Farber:

Okay, that makes sense. Thanks for that. A lot of what I read about AI online talks about removing bias from AI training data sets, but what I've learned from you is that there's also a need to remove that bias from AI testing data sets. Tell us a little bit more about that. Why is that the case? What could go wrong if you don't de-bias the testing data sets?

Kevin Killens:

Sure, absolutely. Any data set, whether it be a training data set or a testing data set, needs to be as accurate as possible, and here's the key about AI: AI takes enormous amounts of data. Now, the great part about it is we only need about 25% of the data that other models need. We're able to reach the point where we can be accurate with about a quarter of the data they need.

Kevin Killens:

A good example: we actually took the Amazon corpus - what they call the 'MARC,' or the multi-language Amazon reviews corpus (and I may be misquoting that). We take that data set - it's about 80,000 reviews from Amazon - and we inject it into our engine, very much like they did. Well, they take an already trained model - a model which could have taken anywhere from months to years to train - they inject the fine-tuning data, about 80,000 records, and then they run the test sets. The test sets and the fine-tuning take them about 10 hours. We take our same engine from scratch - our engine, which does not have any prior training - we run the fine-tuning on it, which only takes about 20,000 records versus that 80,000; and then, in addition to that, we run the test through. We do that all in seconds versus 10 hours, which is a huge opportunity when you look at needing fast responses.

Kevin Killens:

The key to getting accuracy is having accurate data. So, we actually detected in that particular data set that there is some inaccurate labeling of records. What we did was identify where that inaccuracy is, and then we can also, should we choose to, correct those sets. So, the great part about it is, if you put test sets in that are inaccurate and the model doesn't understand what those test sets should be, it's still going to give an answer, but it could be very much an inaccurate answer. So, it's critical that your test sets are as accurate as possible, because you need to know what the actual answer is: "Is this auditable or is it not?" Well, if I put inaccurate data in my test set, I may come up with saying it's auditable when it shouldn't be, and that then uses resources from that company to do auditing that they don't need to do. Or, on the converse, "No, this shouldn't be audited," and therefore they skip over it. And now they have someone on their staff handling calls incorrectly - speaking specifically to the telephone answering service example - and that affects the results for the customers that they have.

Kevin Killens:

So, testing data sets are very important to have. The more accurate the data sets, the more accurate the education, or the training. We can help customers out there who have large data sets that people use for either training or testing - and there are companies who sell those - by correcting some of the inaccuracies in those sets, so that we can all have a better product out there. We're not here to replace anybody. What we're here to do is to help the overall industry become better. That's our tagline: "Better AI, making AI better."

Debra J Farber:

I like that. That's actually something, you know, you've got to put that on a T-shirt because it's a good line. Tell us a little bit about the product that AHvos has: Data Crucible, which is your solution for identifying and correcting inaccurate data set labels.

Kevin Killens:

Sure. With Data Crucible, when you give us a data set, you'll say, "Okay, here's all the data and here's the label we want" - something like, "Is this a large, medium, or small?" Inevitably, whenever we receive data sets, there are going to be some labels that you can just visually see are wrong. There are going to be some that are accurate - obviously, the more the better. Then, there'll be some that are blank, because either it wasn't labeled, or it got skipped over, or maybe there was a flaw in the process, whatever the case may be. What our system can do is go evaluate the set of data and say, "Well, this should be a large and it says small; this should be a large and it's left blank; this should be a small and it's marked large." Then we can do one of a couple of things: we can automatically correct or label those for a company, or we can take that data set and say, "Here are the ones that we discovered have inaccuracies; what would you like us to do with them?" and give the customer the opportunity to correct their data if they choose to. Or they say, "Well, that's such a small percentage, we're not worried about it."
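
[Editor's note: a minimal sketch of the kind of label audit Kevin describes here. This is not AHvos code; the function name, record format, and predict callback are hypothetical illustrations. It flags blank or suspect labels by comparing each given label against a classifier's prediction, and only rewrites labels when the customer opts in.]

```python
# Hypothetical sketch (not AHvos code): a generic label audit over a dataset.
from typing import Callable

def audit_labels(
    records: list[dict],            # each record: {"text": str, "label": str or None}
    predict: Callable[[str], str],  # any trained classifier's predict function
    auto_correct: bool = False,     # only rewrite labels if the customer opts in
) -> list[dict]:
    findings = []
    for i, rec in enumerate(records):
        suggested = predict(rec["text"])
        given = rec.get("label")
        if not given:
            issue = "missing label"
        elif given != suggested:
            issue = "possible mislabel"
        else:
            continue                # given label matches prediction; nothing to flag
        findings.append({"index": i, "given": given, "suggested": suggested, "issue": issue})
        if auto_correct:
            rec["label"] = suggested
    return findings
```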

Kevin Killens:

In all reality, I would think that that wouldn't be the case. So, we give the customer the opportunity to act on their data. Once again: privacy. We're not going to automatically act on someone's data unless they give us the go-ahead to do so.

Kevin Killens:

It's important that people respect other people's data. If I gave you $100,000 and I said, "Hey, I want you to hold on to this for me," but you went out and shared it with a bunch of people on the street, I think people could understand that that would be a gross misplacement of trust on my part. Well, it's even worse with data, because data is something that can be used not only to affect people's lives, but to affect decision-making that affects others' lives. So if we're going to train, if we're going to test data on behalf of the public or on behalf of other companies, then we need to do the best we can in order to provide them what they should feel is well-placed trust: giving them the opportunity to act on their data in the way they choose, and also showing them anything that they may believe is accurate when it's not, or vice versa.

Debra J Farber:

Thank you so much for that. That's a great explanation. What advice do you have for our audience of privacy engineers as they tackle AI challenges in their own organizations? I define 'privacy engineers' as those who work on privacy as researchers, designers, architects, developers, data scientists - so, a broad technology focus. If there are any of those groups you'd want to call out, or just generally, what advice would you have?

Kevin Killens:

Well, my first statement would be: everyone is a privacy engineer. That's from the customer who owns the data, all the way down to the person who may be answering a support call, someone who may be actually addressing the data, a researcher, etc. If you're not a privacy engineer, then you're part of the problem, because, in the way that AI works, everyone has the opportunity, when they see something, to say something. So you're a privacy engineer; speak up. It's really important. One of the things that we have within our company is that if anyone sees something, it is their responsibility to say something, and no one's going to get in trouble for saying something in our company.

Kevin Killens:

Now, I understand that, more often than not, this is a contentious aspect in the AI world today. Everything is about speed. There was recently an announcement made about a new model by one of the major companies, and then it was found out that their demo was canned. [Debra: It was Google] Yeah, exactly. That's a race to market before you have a finished product, and, as a result, there were some scathing responses out there. We can't do that with people's data. The reason we haven't come out with our alternative to LLMs, so to speak, is because there are safeguards that we need to place around it.

Debra J Farber:

And that should be in the MVP, right? It shouldn't be, "Well, we'll race to market and see what people think about it, and then we'll add controls later."

Kevin Killens:

It has to be in the MVP; it can't come later. And to that point - that's a great point that you make, Debra - related to your privacy engineers: what your discipline ensures is that when an MVP comes out, it includes privacy. So many times, what you see is that privacy is not embedded with developers in the DevOps routine, and if you don't embed privacy as a discipline there, it will not exist.

Debra J Farber:

A lot of the time, it's baked into the architecture. If you try to deal with it later, you might have to rebuild your entire architecture and tech stack and...

Kevin Killens:

You will. That's what I was about to say. We all know about 'technical debt.' Every company out there has technical debt. Technical debt is like the national debt: it never gets paid down. The reason is that the debt is almost insurmountable for companies. In a fast-paced world, going slow is going fast. If you go slow and make sure that privacy is within your AI model and architecture, you will go fast in the end, because you will not retread; you will not have PR issues; you will not have lawsuits around your data privacy. That is absolutely critical. There are lawsuits going through the courts right now around data scraping, around data leaks, and you are on the forefront. You're on the edge of the knife, that cutting edge that keeps companies accountable for what they do.

Debra J Farber:

There must be so much pressure on, not just privacy engineers, but engineers working on AI generally, if they work for large companies that feel like they're in an arms race against one another with these different models they're coming out with, especially generative AI. Then it's really hard to - I'm not saying they shouldn't do it; of course they should speak up - but it's really hard to tell everyone to slow down when the train has already left the station. You know? But I do agree with you because, like Dr. Rumman Chowdhury says when she gives speeches on this topic, it's like when you add brakes to a car. Back when the car was being developed for the first time, by adding brakes, which slow you down literally by design, you actually drive faster, because you have trust in the system that you're in. You have trust that they'll work when you need them; and therefore, you trust yourself to go faster in that car. So, I absolutely agree with you.

Debra J Farber:

I've been repeating this. This is not the first time I've mentioned that parable from Dr. Chowdhury, on this show at least; I think it's just the perfect summation of what you just said. Even though you feel like you might be going slower, if you're adding privacy protections, security protections, you're thinking about the long term and other legal requirements, copyright, whatever else is out there. If you're not thinking about that holistically and building that into the MVP, then even though you might have a lot of traction up front, like OpenAI does, you might not be ahead in the long run, because you won't be able to compete with companies that can sell products to enterprises and assure certain things - like, "We're not going to be ingesting the wrong data; it's going to be unbiased." You've got to threat model and then figure out how you're going to tackle that. But anyway, I'm pretty much saying, "I 100% agree with you." That's great advice.

Kevin Killens:

I appreciate it, because speed cannot offset privacy. You can't go into surgery saying, "Yep, we're just going to run through this; we're not going to think about contamination, sterility, etc." If you do, the patient's going to die. That's what's going to happen to us if we do not understand that, without putting privacy at the forefront of how we treat our customers, customers will stop trusting us.

Debra J Farber:

That's true, especially in the enterprise space - or I should say the business space - compared to consumers, because consumers can use a lot of these LLMs for personal purposes. Maybe they don't know it at the beginning, but eventually they start to realize, "Oh, it's not always accurate." That could be a lesson learned by people across countries: there are - I won't call them hallucinations - there are lies, straight-out lies, that sometimes the responses will give back. That's something that can handle, I think, a little lower fidelity if it's going to help you as a personal assistant of some sort. But, if you've got a business that's making decisions about people and other important things, you want to make sure that that's accurate. So, what I hear (and I'm wondering if you're hearing the same thing) is that even though everyone and their brother has been deep-diving into AI, trying different apps, using different models, and maybe playing around and testing some things in their own companies - what I'm hearing is that very few large companies have actually deployed some of these generative AI models, at least.

Kevin Killens:

Absolutely.

Debra J Farber:

Because there's so much risk, and they have to first define what that is and then figure out a way to address it. Then, it might be limited by the nature of how the generative models ingest data in the first place.

Kevin Killens:

It is true. The key is that most of these corporations don't even really know how they want to use AI. So, one of the great things that we have with our partner, Trinsic, is that we actually sit down and discuss with them what challenges they're running into and how we can solve them - once again, contextually, with what they need. It does not clear us of what we need to do just by telling people, "Don't use this for anything important," which is what the generative AI folks say. You know, "This is just beta," or, "This isn't to be used for anything that is critical." People are going to do that anyway, and we have to make sure that we understand that we're putting out a product that could be used for something serious; and if it's not ready for that, then we need to make sure that we protect our customers by helping to put the security, and definitely the privacy, around things.

Debra J Farber:

Makes a lot of sense to me. Well, Kevin, how can people reach out to you? What's the best way and what's your website, some contact information before we close today?

Kevin Killens:

You can reach us at www.AHvos.com and also at www.TrinsicTechnologies.com, which is our partner. You'll be able to get to us. We look forward to any questions people may have, any opportunities they feel they see within their own marketplace where we can help them. Once again, it's important that people understand we do put privacy at the top of our priorities, because without it, the rest of it's just a house of cards.

Debra J Farber:

Well, Kevin, thank you so much for joining us today on The Shifting Privacy Left podcast. Until next Tuesday, everyone, when we'll be back with engaging content and another great guest. Thanks for joining us this week on Shifting Privacy Left. Make sure to visit our website, shiftingprivacyleft.com, where you can subscribe to updates so you'll never miss a show. While you're at it, if you found this episode valuable, go ahead and share it with a friend. And if you're an engineer who cares passionately about privacy, check out Privado: the developer-friendly privacy platform and sponsor of this show. To learn more, go to privado.ai. Be sure to tune in next Tuesday for a new episode. Bye for now.
