Serious Privacy

Scary yet celebrated: Spooky AI (Woodrow Hartzog)

November 02, 2023 Dr. K Royal and Paul Breitbarth and Woodrow Hartzog Season 4 Episode 40

Paul Breitbarth of Catawiki and Dr. K Royal connect with Woodrow Hartzog, Professor of Law at the Boston University School of Law. He also has some other academic roles, including at Washington University, Harvard, and Stanford. His research focuses on privacy, media, and technology. Recently, Professor Hartzog testified before the Judiciary Committee of the U.S. Senate in a hearing on Oversight and Legislation on Artificial Intelligence.


Last summer, Serious Privacy released an episode on Artificial Intelligence in the wake of the European Parliament’s adoption of the EU AI Act. And although negotiations in Europe are still ongoing, it seems agreement on this new law is close. In recent weeks, the White House has released a blueprint for an AI Bill of Rights, as well as an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. And on the day we record this episode, 1 November 2023, the UK Government hosted an AI Safety Summit at Bletchley Park. 


If you have comments or questions, find us on LinkedIn and IG @seriousprivacy @podcastprivacy @euroPaulB @heartofprivacy and email podcast@seriousprivacy.eu. Rate and Review us!

Proudly sponsored by TrustArc. Learn more about NymityAI at https://trustarc.com/nymityai-beta/

#heartofprivacy #europaulb #seriousprivacy #privacy #dataprotection #cybersecuritylaw #CPO #DPO #CISO

This is a largely automated transcript. For accuracy, listen to the audio

[00:00:00] Paul: Last summer, we released an episode on artificial intelligence in the wake of the European Parliament's adoption of the European AI Act. And although negotiations in Europe are still ongoing, it seems agreement on this new law is close.

And in recent weeks, the White House has released a blueprint for an AI Bill of Rights, as well as an executive order on safe, secure, and trustworthy AI. And on the day we record this episode, the 1st of November, the UK government hosted an AI safety summit at Bletchley Park. So our guest today is Professor Woodrow Hartzog, Professor of Law at the Boston University School of Law.

He also has some other academic roles, including at Washington University, Harvard, and Stanford. And his research focuses mainly on privacy, media, and technology. But the reason we've got him is that recently, Professor Hartzog testified before the Judiciary Committee of the U.S. Senate in a hearing on oversight and legislation on artificial intelligence.

So, lots to talk about. My name is Paul Breitbarth.

[00:01:11] K: And I'm K Royal and welcome to Serious Privacy. So I had to, I had to stop myself from laughing when you're like, and other academic endeavors. I mean, is there something that Woody hasn't done?

[00:01:25] Paul: Fair enough.

[00:01:26] K: I mean, right? I mean, we both come from...

[00:01:30] Paul: The list of credentials is long.

[00:01:32] K: It really, really is. And coming from two people who would love to be doing it full time, I mean, yeah, Woody, all bowing to you. Okay,

[00:01:42] Woody: But check's in the mail, check's in the mail for y'all.

[00:01:44] K: Ha, ha, ha. Unexpected question: Is Halloween a holiday?

[00:01:51] Woody: That is such a good question. I will do you one better and say Halloween is not only a holiday, it is the best holiday, for so many different reasons.

[00:02:07] K: Right?

[00:02:08] Paul: Well, let's hear them because I don't get it.

[00:02:10] Woody: Okay, so for one... it is a holiday that's distilled down into pure joy, right? So as you walk around the streets, you see everyone having a good time.

You can be as chaotic as you like. It's full of delicious candy. It takes place in fall, which is the best season, and there are crisp leaves on the ground. And my wife pointed out to me that people also love Halloween because you're not forced to spend time with your extended family, like you are at other holidays.

[00:02:42] K: That is true. That is true. And yet you can still decorate with all the lights and the statues and the, the everything. Oh, it's so awesome. I will say that the question was asked yesterday on LinkedIn. And my immediate response is yes, it's a holiday because when people ask my favorite holiday, it's Halloween.

Clearly. But then I was like, well, is it? It is to me. It is to Woody. So, Paul,

[00:03:12] Paul: No, I mean, I happily accept that, but for me, Halloween is a very American thing, let me...

[00:03:22] K: That is true. I brought you Halloween candy.

[00:03:25] Paul: You did, and Halloween is crossing over, at least to the Netherlands and to other parts of Europe, and it is being celebrated more and more. But traditionally it is not a thing that we celebrate in the Netherlands, also because on the 11th of November we celebrate Saint Martin's, which is not as scary, but the kids do go door to door with lights and collect candy and all of that, and that's just 11 days from now.

So Halloween is basically encroaching upon that a little bit, so we're replacing our own traditions with the American holiday of Halloween.

[00:04:00] K: Did you never watch Charlie Brown and the Great Pumpkin?

[00:04:05] Paul: No.

[00:04:07] K: So deprived. This is Snoopy and Linus and the Great Pumpkin patch. Oh, you're so deprived. With that, I guess we have no other choice but to start talking about AI.

[00:04:21] Paul: I guess so.

[00:04:23] K: So, shall we do that then? So, here's the thing: I did propose this show right on the heels of your testimony to Congress on AI, which was absolutely fascinating. I'm like, you know, we haven't done a show on AI in a while, and you would be the perfect person to have on to talk about it. It's the topic on the tip of everyone's tongue, and I can't even say that three times fast.

It's a solution in search of a problem, but it's been around for so long, and now people are starting to get in an uproar about it. So, what would you like to share first?

That's a very broad question.

[00:05:03] Woody: So, I suppose the best starting point is to think about the discourse around artificial intelligence, and to understand that AI is not some one specific technology, particularly as we've come to understand it. So, I like to think of AI kind of vaguely and ambiguously, and then hone in on very specific deployments or applications of it.

In the same way that we used to talk about big data. And in many ways, that conversation has just sort of merged into the AI conversation. I think

[00:05:36] K: Because it makes it bigger data.

[00:05:38] Woody: Right. Exactly right. AI is just bigger data, plus, you know, a touch of futurism or anthropomorphism that causes people perhaps to think about it differently.

Now, a lot of the deployments that we're seeing now, what people are referring to as generative AI, have also been around for a long time, but are getting deployed in ways that feel new and revolutionary to us,

[00:06:05] K: and mainstream. 

[00:06:06] Woody: right,

[00:06:06] K: Mainstream is the big issue.

[00:06:09] Woody: Right, right, right. Although even the bar for sort of what counts as revolutionary has been lowered so much, because, you know, it's just been year after year of sort of incremental development of technologies.

And so I think it's important just to be specific about what we mean in context when we're talking about specific problems of AI. The other thing that I want to sort of start with foundationally is to make clear that when we talk about AI, this is not some unknowable thing or some force beyond all human reckoning.

So the singularity, if it's going to happen at all, is certainly not happening anytime within our lifetimes. And so this conversation often tends to merge with our perceptions of science fiction in really unhelpful ways. And what I mean by that is there's a couple of things going on, one of which is, what's the old saying?

Any technology of sufficient complexity is virtually indistinguishable from magic.

[00:07:18] Paul: Well, that's certainly true for ChatGPT.

[00:07:20] Woody: Yeah, and we tend to view AI that way, but that's a dangerous way to think about AI. Because it's just people that work for companies or governments that are using computational systems to combine data with algorithms to make predictions about things, right, or to accomplish certain sorts of tasks. And I think you have to think about them in that way.

It's people all the way down, because otherwise you tend to attribute agency and almost a kind of inevitability or mysticism that just isn't warranted, I think, with AI. The other reason that I think it's important to demystify AI, and to understand that it is just people doing algorithms and data, is that so much of the talk around artificial intelligence is about superintelligence, and the idea that AI is going to sort of take over the world and it'll be unstoppable and create a mind of its own.

And that really is just a casual way, I think, for industry to deflect the fact that AI is being implemented in systems right now in ways that are creating significant dangers for people. And it distracts from that conversation. And so I'd much rather pull back a little from the superintelligence narrative and from the AI-is-going-to-take-over-the-world narrative, to a little more realistic place about how AI is actually being deployed right now, and is probably going to be deployed in foreseeable ways in the near future, and let's talk about what sort of legislation we need to handle the problems that are created by those kinds of designs.

[00:09:09] K: Right, so what you're getting at is it's not quite so scary, or it may be scarier, or should we celebrate it like a holiday? So,

[00:09:20] Woody: I mean, I will say that, in using a lot of AI myself, I see some of the things that it can do, and some of the efficiencies, and some of the ways in which it's entertaining or helpful or useful. But my natural inclination, after studying information technologies and privacy for the past 15 or 20 years, is to be pretty skeptical at this point, not because of the technology, but because of the forces that have long been powerful in our society, and what they're inevitably going to try to do with those technologies.

[00:10:03] Paul: Well, I mean, part of what we're seeing, of course, are the techniques that people think are fun. ChatGPT is one, Midjourney for sure with all the images, but also, more recently, there are now tools that can make videos that translate speeches of world leaders into any language.

And it sounds like they are speaking that language. I've heard Donald Trump speak Dutch, which is even scarier than Donald Trump speaking English.

That also brings, of course, a big risk of fake news and how to recognize it. Because, to be honest, the quality is pretty impressive.

[00:10:43] K: yeah,

[00:10:44] Woody: Oh, absolutely. I mean, I think that this is one of the most important conversations that we need to be having right now, which is: the larger problems that AI makes worse. So,

[00:10:58] K: That's a good way of putting it, yeah,

[00:11:00] Woody: Yeah, I mean, I've long argued against and been critical of facial recognition technologies, particularly face surveillance.

We've had surveillance before, but facial recognition makes the problem of surveillance, particularly the harmful effects of surveillance, significantly more pressing and worse, I think. And we see this with misinformation and deepfakes as well: misinformation and disinformation campaigns have been around for quite some time.

But AI lowers the cost of those, and changes the way that people perceive media, in a way that makes it really worrisome. And so I think that any approach to legislating AI needs to understand that it's not wholly new problems that we're dealing with. It's the exacerbation of lots of existing problems that might now finally warrant some sort of legislative intervention.

And you brought up the sort of joys of playing with some of the generative AI systems, like, you know, DALL-E and some of the others where you can type in, you know, produce an image of Woodrow Hartzog at Hogwarts or whatever. And, and...

[00:12:23] K: Ah, now that might be a fun one. I actually refused to try ChatGPT, but I did try the professional headshots one, because it seemed... I also tried one a few months back where it makes all these fairies and sorcerers and everything. I didn't like that one at all. All the women came out mostly naked.

[00:12:44] Woody: Oh, right. So then there's... right. So, to say nothing

[00:12:46] K: yeah,

[00:12:47] Woody: of the issue of bias,

[00:12:48] K: Right. That goes into it. But I'll say, uploading eight to 10 actual professional headshots for them to generate 20 professional headshots, only one looked like me.

It was like... so I can't say that my experience was as fun.

[00:13:07] Paul: That just shows how special you are. Okay.

[00:13:09] K: Clearly, clearly. But I figured in order to speak about generative AI, I needed to try some of it.

So I've been looking around for some of the least dangerous pieces to play with, and imaging seemed to be one of them, but, right...

[00:13:26] Woody: Oh, I worry about this all the time. And what I really worry about... I agree with you that, as people who are working in this area, we need to have some sort of familiarity with it. But I'm so conflicted when I use a lot of these programs, particularly because I know that a lot of these sort of gee-whiz applications, while they're fun to play with, ultimately end up normalizing a lot of our attitudes around these technologies in society.

And then when I interact with that, I'm participating in that normalization myself. I've got a draft paper that's about to be finalized, with Evan Selinger and Johanna Gunawan, called Privacy Nicks: How the Law Normalizes Surveillance. And in it, we make the argument that the law is actually complicit in normalizing surveillance, and that unless we change our rules, we're on track to be conditioned to tolerate every kind of encroachment, and that the reasonable expectation of privacy test is a fundamentally broken test, because it relies upon societal norms to set the thresholds for limiting the encroachment of surveillance.

And I've begun to feel the same way about lots of different deployments of AI. And this is why I think that there's a really urgent need to get rules on the books quickly. I think that there's a pretty good argument in favor of the precautionary principle generally for a lot of these technologies, because of the normalization effect that happens when companies slow-roll these technologies out bit by bit.

And it's the same cycle every single time. There's a little bit of resistance. Everyone goes, oh, I don't know about that, but it's kind of fun to play with, and isn't this sort of cute. And then, you know, one by one, there's sort of a modest benefit that people could get from it, and companies then sell these technologies at an incredible discount, because the real service, of course, is not what they're offering but the human information they can gain off of it. Ring cameras are a really great example of this. And the next thing you know, everyone's being surveilled everywhere, because we've all got Ring cameras now, right?

[00:16:00] Paul: And the police are watching along.

[00:16:01] Woody: And there's yet another space in society where we can't be without cameras being in our face, and we just become conditioned.

[00:16:12] K: We're conditioned to it. You're right. And it's part of that normalization. And I think you're going to touch on this: we're also not getting meaningful decisions or interpretations from the courts, because they're so scared of speaking to a technology that they don't fully understand, as well as of whether their decision is going to impact the development or the use of that technology in the future and have a bad impact on it.

They're really narrowing their decisions to the specific facts of the question in front of them, which means we're not getting guidance. The only guidance we're getting is that the courts don't want to speak to it.

[00:16:52] Woody: Right, the courts are avoiding it, and legislatures, well, up until recently, have too, which is maybe where we can start talking about this. And legislatures are, I think, the actors that are most important in this space to create some significant rules and frameworks to decide which AI systems we should be deploying and which ones are ones where the juice is just not worth the squeeze.

And instead, for years, we were sort of caught in this reflexive deferral to companies, because you wouldn't want to do anything that would squelch innovation, right? For years it was that the internet was this little baby, and you wouldn't want to, you know, harm the baby. And so don't pass a rule that's going to tell the baby what to do, because that could squelch innovation, right?

And that narrative persists. Even though we've seen some incredible regulatory movement in the space of AI, the innovation narrative is still so powerful, and I think in so many ways distorted and unjustified, and it is used as a sort of trump card. At any time, companies will say, we want regulation, we want regulation, and they say it over and over.

And then if you propose anything that's not completely in sync with their business model, they throw their hands up and they say one of two things. One is, oh, but what about innovation? You wouldn't want to lose out on that. You know, it'd be a shame if you lost all these fancy Ring cameras or whatever it is that we're working on, if you pass these rules. Or they say free expression, right? Like, you can't tell me what to do. This is my sort of expression of how I think AI systems should be built, and therefore free expression rules prohibit any sort of encroachment whatsoever on what essentially boils down to a business model to make money.

And so I worry about those two,

[00:18:48] Paul: No, I fully agree. And it seems that almost everybody around the world agrees. If you look at today's statement out of the Bletchley Park Summit in the UK: 28 governments, including all of the European Union, but also Australia, the United States, China, the UK, Brazil, you name it.

They all say together that there is a potential for serious, even catastrophic harm, either deliberate or unintentional, stemming from the most significant capabilities of AI models. And they call for legislation, they call for rules, they call on us to be smart and sensible.

[00:19:25] K: You okay? 

[00:19:26] Paul: But that is only calling for things and not doing things. So how do we move to that next phase?

[00:19:33] Woody: Yeah, that's such a great question, Paul.

[00:19:37] Paul: So give me a great answer.

[00:19:39] Woody: I'll tell you what's not going to be enough, and then I'll tell you what I think will be a little closer to actually protecting people in a world where artificial intelligence is continuing to develop.

So, in my testimony in front of the Senate, I worked in collaboration with some of my colleagues. I'm a fellow at the Cordell Institute at Washington University, and I, along with Neil Richards and Ryan Durrie and another fellow, Jordan Francis, have been working for the past few months on a project that we call the AI half measures project. The idea of half measures, and this is what I said in my testimony, is that there are certain concepts that lawmakers seem very attracted to when pulling from the regulatory toolkit for artificial intelligence. And it is critical, of course, for regulators to embrace these measures, but if we stop there, the argument we're making is that these will just be half measures.

They will not be enough to protect us. So what are these half measures? Well, the first half measure is one that we've heard a lot, which is transparency. Lawmakers are quick to say we need transparency in AI, which is absolutely true. But the problem with transparency is that it doesn't fix anything by itself.

We've had lots of transparency in lots of other realms, and I'm speaking of information privacy, where we've had a fair amount of transparency, at least around certain kinds of data practices, and it doesn't change anything all by itself. We have to have meaningful changes to practices and, honestly, the business models in order to meaningfully effectuate change.

Another half measure that we talked about was the idea of removing bias, which we referred to earlier. It is indisputable that AI systems that are being deployed now are harmfully biased along lines of race, gender, ability, and other traditionally marginalized and vulnerable communities, and it is absolutely vital that we have rules and all sorts of other non-regulatory initiatives to ensure that bias is removed from these systems.

But one of the harmful narratives that I've encountered, and I've had a lot of conversations around this and sat on Massachusetts's commission to regulate facial recognition technology, one of the narratives that I've heard in a lot of my discussions is that if we solve the bias in AI,

then that's what will make it safe to use. And that's a really harmful way to think about it. In many ways, the more accurate an AI system becomes for all communities, the more dangerous it gets. And that's because it's going to become more attractive to those in power to use on the vulnerable.

It will become significantly more attractive when it has very low, you know, false positive and false negative rates, for example in surveillance systems. And that's where we know people of color are going to face the brunt of that newly enabled surveillance first and hardest.

[00:23:14] K: Right. Because it can be targeted. It can be targeted and it can be driven

[00:23:19] Woody: exactly.

[00:23:20] K: Yeah.

[00:23:21] Woody: Yeah. And so removing bias in a way just creates a more effective tool for discriminating against and oppressing people.

[00:23:28] K: But, you know, I will say, the thing about bias that gets me: when you talk to average business professionals who are not involved in privacy, not involved in technology, their concern is, so it's junk data. It's not junk data. The data that's going in is accurate data. It's junk mores. It's junk cultural traditions.

All of these things that we do create that data, and make it accurate data; it's just incredibly biased.

[00:23:58] Woody: Yeah, I mean, there's so much here. There are narratives that I think are being used as pretext to just collect more information.

[00:24:11] K: Yeah, 

[00:24:13] Woody: Often they'll say, well, we just need to collect all information on everyone to make sure that we eliminate the bias, but that

[00:24:19] K: Everything, everything.

[00:24:21] Woody: Right. That just sounds so convenient for industry, as an excuse to suck up every piece of information, right? Right, exactly. So bias mitigation and transparency are half measures. The third half measure is ethical principles. Ethical principles are important, right? So when AI first started becoming a thing that companies started adopting, you saw so many companies saying, well, we adhere to the top ethical principles, and we've got embedded ethicists, and we're really working out all the different principles and the ways in which we're going to follow the rules.

And those are all well and good, but without laws, it essentially turns into a self-regulatory system, and we know that's going to fail. And it's going to fail hard, because companies don't have the incentive, and some of them will argue, and have argued, that they have a fiduciary responsibility to maximize profits.

And so they're not going to leave money on the table or can only leave money on the table for so long. 

[00:25:28] K: right 

[00:25:29] Woody: Ethical principles be damned. And so we need significant rules, because self-regulatory commitments to ethical principles just aren't going to cut it.

[00:25:42] K: Well, and for some of these entrepreneurs or business owners, I mean, how long could you leave money lying on the table in front of them before they're all jumping all over it?

[00:25:51] Woody: Well, right, exactly. And I think that facial recognition is a really good example of this. Kashmir Hill from the New York Times wrote a really interesting article that talked about how many of the larger tech companies for years stayed away from facial recognition technology for ethical reasons, for concerns that it was just too corrosive, and it was too harmful, and it was too dangerous.

Which I think is to be commended. But then you've got a company like Clearview AI, that scrapes every profile it can get its hands on, and PimEyes, also creating the most dangerous surveillance tools ever invented.

[00:26:32] Paul: And governments love them.

[00:26:34] K: Maybe we could defeat that by everybody using their generative AI profile shots.

[00:26:39] Paul: Or their Halloween costumes. 

[00:26:41] Woody: Yeah, yeah, yeah. We'll just engage in countermeasures. The problem with that is it's a cat and mouse game, and the problem with cat and mouse games is that it never works out so good for the mouse.

[00:26:51] K: Really doesn't, does it?

[00:26:53] Woody: Inevitably,

[00:26:53] K: Only for Tom and Jerry.

[00:26:55] Woody: Yeah, inevitably, cats are pretty amazing hunters and do a

[00:26:59] K: Yeah.

[00:27:00] Woody: job, so I'm reluctant to get into a cat and mouse game.

[00:27:02] K: And I said that facetiously, but there is an honest thought behind it. A lot of people's solution to defeat questionable technology, or to be able to use technology that others might question, is to come up with a countermeasure that might be equally questionable.

[00:27:21] Woody: Oh, yeah. I mean, listen, I give a lot of talks, as you might imagine, and I get the same question at a lot of my talks, which is: what can I do to protect myself against these systems? And it ends up being kind of a bummer, because my answer is usually, not a damn thing. You're going against the most powerful companies in the history of the world in terms of resources and ability to shape our mediated environments and the decisions that are made about us on a daily basis.

And so the idea that I could, you know, deploy some sort of technological technique or strategy that would meaningfully resist, in a way that wasn't either at the margins or really just pure theater, it's just not tenable. There is one thing that we could all do, and I tell this to all of my students, which is: elections matter, and we can vote, and we can vote for people that will promise to create meaningful rules for us, and not just at the federal level, but at the state and local levels as well. These technologies are being woven into the fabric of our everyday lives, and the issues that they touch on are important locally and internationally.

[00:28:38] K: That is true. Well, and speaking of that, what'd you think about the recent issuance by our White House?

[00:28:45] Woody: Well, I need a few minutes. So first of all, I'll admit to not having fully read the EO, because I think it's 140 pages or something, so I haven't worked my way all the way through it since it just came out. But I will say that, after reading the summary, I think there are reasons to be excited about the EO, in that it is really broad in scope and it recognizes, I think, the complexity of the problem.

And I applaud a lot of the initiatives around privacy and anti-discrimination, and a lot of the things that you would want to see in an executive order. At the same time, one of the things I worry about, maybe it's just the framing of the EO itself and some of the language that was used, but I worry that the White House has skipped right past the existential question and gone straight into: all right, so we're going to adopt this thing, and we're going to see it implemented, and let's make sure that it has guardrails.

And one of the things I really worry about is that one of the key things it's important for lawmakers to consider is whether certain kinds of systems should be designed or deployed at all. And that's what I've called the existential question. So I've argued with Evan Selinger for years now that facial recognition, particularly face surveillance, on balance is going to leave us worse off, and as a result should be prohibited outright.

[00:30:30] Paul: And it's becoming so common already. I mean, just boarding an airplane in the U.S., you're subjected to facial recognition.

[00:30:39] Woody: Right, exactly. And this is the normalization that's already happening with facial recognition. There's a window where we might be able to think existentially about facial recognition, and it's important for us to do that in the beginning, because we still understand a world without it. But once you've become accustomed to it, you start to be sort of favorably disposed to it, and it's easier to forget all that we've given up to get it.

It's hard to imagine a world without the Internet. Someone was like, what would you do without turn-by-turn directions? And thankfully, I still remember: I muddled by, reading a map, right? And we sort of worked through it. You know, I guess it was a tough life, but we somehow made it. The point is that we lose our regulatory imagination when things become normalized, when we come to accept them. And without having a frank discussion about the values that we want to preserve and the bright-line rules that we want to create, we lose those values, and what happens is that industry sort of takes us along, you know, this journey, and we get where we get, and it becomes hard to see outside of it.

[00:31:52] Paul: So unfortunately, we are running out of time for today's recording, because this is a fascinating conversation. So I'm sure we'll invite you back next season to continue this debate, because I also think this is not the last piece of pseudo-legislation that we've seen. After the executive order, there will be more, and maybe also some serious action from legislators around the world.

So, Woody, thank you so much for joining us today. And on that note, we'll wrap up another episode of Serious Privacy. If you like our episodes, join the conversation on LinkedIn. Like and subscribe in your favorite podcast app or on your favorite podcast platform. You'll find K on social media as @HeartofPrivacy.

Thank you. And myself as @EuroPaulB. Until next week, goodbye.

[00:32:37] K: Bye, y'all.