Politically High-Tech

233- Unpacking Corporate DEI and AI Misconceptions: Insights from Jason Brown

Elias Marty Season 6 Episode 23

Send us a text

Is corporate DEI just an empty gesture, or can it truly foster genuine inclusion? Join us on this episode of "Politically High Tech" where we engage in an eye-opening conversation with the dynamic and confident Jason Brown. We kick off by contrasting the superficiality of corporate DEI initiatives with natural diversity and genuine inclusion, and Jason shares his thoughts on the importance of timing in conversations and maintaining confidence. You'll even get to know some fun personal tidbits about Jason, including his love for long walks on the beach, hot dogs, and his competitive streak in chess.

Moving forward, we tackle the widespread misunderstanding of AI among the general public. Despite AI’s presence in everyday life for decades—from video games to chatbots—many still struggle to grasp its current applications. Highlighting historical milestones like IBM's chess matches and AI in video games, we shed light on the transformative potential of large language models and emphasize the necessity of making AI tools accessible and comfortable for everyday use. Education is key as we prepare for a future increasingly influenced by AI.

Finally, the episode takes a critical look at AI's societal and security impacts. We share cautionary tales about AI-driven scams and underscore the importance of rigorous verification processes. The conversation then pivots back to DEI, where we discuss the challenges of superficial implementations and the need for meaningful efforts that address equity and inclusion issues at their core. Don't miss the recommendation to visit "thejasonbrowne.com" to explore Jason's impactful work, including his TED Talk. Tune in for engaging discussions, insightful anecdotes, and a healthy dose of Jason Brown's infectious competitive spirit.

Follow Jason Browne at...

https://thejasonbrowne.com/

Social Media

Facebook

https://www.facebook.com/thejasonbrowne

YouTube

https://www.youtube.com/@thejasonbrowne

Instagram

https://www.instagram.com/thejasonbrowne/

LinkedIn

https://www.linkedin.com/in/thejasonbrowne/

TikTok

https://www.tiktok.com/@thejasonbrowne_

Sources I have mentioned for short news updates while combating misinformation

https://thenewpaper.co/refer?r=srom1o9c4gl

https://apnews.com/

If you want to be a guest on my podcast, please join PodMatch for an easy start and process.

https://www.joinpodmatch.com/politically-high-tech

Support the show

Follow your host at

YouTube and Rumble for video content

https://www.youtube.com/channel/UCUxk1oJBVw-IAZTqChH70ag

https://rumble.com/c/c-4236474

Facebook to receive updates

https://www.facebook.com/EliasEllusion/

Twitter (yes, I refuse to call it X)

https://x.com/politicallyht

Speaker 1:

Welcome everyone to Politically High Tech with your host, Elias. I have an exciting guest here, and I have to say it, trust me, I like it. He's pointing at himself right there. I am for it. We need more confidence like that, guys. Moving forward, I'm going to expect you to exude some confidence, and if you can't, find a way to fake it. Fake it well, you're on camera, all right. So trust me, we're going to have some fun over here. And I am fulfilling my diversity mission, not the superficial DEI crap, because, trust me, I was part of one of those groups at my job and it's failing because it's superficial and it's a covert corporate control.

Speaker 1:

So, yeah, I'm happy they kicked me out of it. I take that rejection with pride because I spoke up. I'm a pretty vocal person, but enough about me bashing poor DEI. I am for natural diversity, I am about including people, but don't tell me how to talk. You can give me some tips on how to improve; that I'm open to. But if you're going to say something while I'm in the middle of talking, or when I'm too busy, of course I'm not going to listen to you. So strike when the iron is hot.

Speaker 1:

Timing is important, but I don't want to make this about me; otherwise there's no point in having this guest here. I don't want to go on about my fun, fun moments, because I have a wonderful guest. I just looked up his stats. He is an accomplished public speaker and he exudes positivity, which I love, and I don't even have to ask him for it. If I don't have to ask for it, that's better for me, because I'm a lazy person. But enough about me. What do you want the audience to know about you, Jason Brown?

Speaker 2:

Elias, what's up, man? My name is Jason Brown. I'm here to talk about some tech, talk about some changing-the-world type stuff. We can even debate on this DEI topic if you want, bro; we can go back and forth on that. I am a human being that loves conversation and loves to dig into it and find these avenues where we can connect in ways that we might not have connected before. So yeah, that's me in a nutshell. I like long walks on the beach, that's also me, and I also like hot dogs. So those are two random facts. And I do like to play chess. How about that? Go ahead, Queen's Gambit me. See what happens, bruh. You know what I mean.

Speaker 1:

You heard that? You want to challenge the man, the chess master? All I'm gonna do is be a proud, cowardly commentator, but I will talk a lot of crap. Oh, that queen just slayed you, didn't it? You got the warning, bro.

Speaker 2:

You got me a little salty, you know? Yeah, but I like that.

Speaker 1:

Look at that. Competitors talk, you know, and I'm pretty sure you can back it up. I suck at chess. I'm not going to compete there, but shoot, I think I'd let all the minions just get killed. All the pawns, poor pawns, you're going on a suicide mission.

Speaker 2:

I mean, sometimes that can work if you're playing against somebody who's overly aggressive. So I get that, I totally get that.

Speaker 1:

And then the horse goes all over the place.

Speaker 2:

Yeah, yeah, it depends on your opponent and your strategy and what kind of game you want to play to make it happen. So, if nothing else, if however this conversation unfolds you're like, you know what, I wasn't really feeling much except for the chess, holler at me.

Speaker 1:

Let me know we'll get some games together and we'll play some chess.

Speaker 2:

How about that? Yeah, listeners, you heard that. Get in touch, yes, for chess, ask if it's not too busy, of course. But that's all I'm gonna say about that.

Speaker 1:

Yeah, that's a little disclaimer I had to get in there. Let's get back to reality for a second. Let's start with this AI, and then we can mingle in a whole bunch of stuff.

Speaker 1:

What is your perspective on AI in general? Let's just start from there.

Speaker 2:

I think that the general public still doesn't really understand what's going on. If you ask most people on the street about AI, they'll say maybe ChatGPT. The average person doesn't know about Gemini, doesn't know what Copilot is, doesn't really know what these things are, which I find surprising because I'm a tech kid.

Speaker 2:

I was in IT for 15 years, coding PowerShell, managing virtual machines running big chunks of Penn State University. Not running it single-handedly, I should say; I was part of the team that did it, and I don't want to take too much credit because our department is not there anymore. But from an IT point of view, I just realized that many people don't get what ChatGPT is. In fact, it's surprising how many people still, if I ask them, have you used AI, will say, I think so. Which is probably the case, because many of the systems we have use AI in the background, or have been rebranded as AI when it's just programming, which is fine. And so there's a large swath of people that don't get it.

Speaker 2:

And so what I've been on a mission to do, at least in all the spaces I work in (nonprofit work, thought leadership, leadership work in general), is to get people comfortable with how they can actually use something as simple as a chat-based AI tool such as ChatGPT or Gemini. That is still a struggle, but it's a really easy sell: hey, name one thing that's taken you four hours to do this week. They name these things, and I show them in a few seconds how to accomplish that thing with a few prompts. Conversations change, perspectives change, and then we have to get into the conversation about security and making sure about privacy and all these other things. It's interesting to see just how little people understand and know up to now, which scares me, because I'm afraid that people won't know what's happening over the next five to ten years in AI, which is going to be huge. So we'll see where it goes.
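
For listeners who want to try what Jason describes, here is a minimal Python sketch of handing a small, repetitive task to a chat-based tool. It assumes the `openai` package is installed and an API key is set; the model name and the note-summarizing task are illustrative choices, not anything prescribed on the show.

```python
# Minimal sketch: offloading a repetitive task (summarizing messy notes) to a chat-based AI tool.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set in the environment.
# The model name below is illustrative; use whichever chat model you actually have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_notes(raw_notes: str) -> str:
    """Ask a chat model to turn messy notes into a short list of action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You turn notes into concise action items."},
            {"role": "user", "content": f"Summarize these notes as five action items:\n\n{raw_notes}"},
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    print(summarize_notes("Budget review moved to Friday. Need vendor quotes. Follow up with HR about onboarding."))
```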

Speaker 1:

Well, I don't claim to be an expert. I consider myself just a learner, like a lot of other people. But I have thrown out a couple of random examples. I said, look, we have interacted with AI for decades; it's just that a lot of people don't realize it. OpenAI lived up to its name and made AI more open to the public, right? I could use video games as an example. I mean, when you fight against a CPU, that's an AI challenging you, whether you put it on easy, normal or hard.

Speaker 2:

That's AI.

Speaker 1:

And if you want to talk about websites, even before ChatGPT became a public thing, there were chatbots. There were things that took place before ChatGPT became the thing, and there were bots moderating content and all that. That's AI, all that stuff. I think some people just think, oh, boom, it just appeared. Like you said, a lot of it was background usage that you don't really pay attention to as a consumer, or it's used in such a limited, simplistic fashion. I mean, feel free to correct me here, you're the tech expert, but even Excel functions that compute a bunch of stuff, I think that was AI.

Speaker 2:

I mean, it depends on how you define AI. We were talking about AI back in the video game days. I would think of Metal Gear Solid, how the enemies would move around, and you'd wonder, what is the level of computational power that these enemies, that my opponent, has? And even if you bring it back to chess, think about when IBM was competing with the grandmasters in the late nineties and early 2000s, really trying to set the tone about how computationally we can outperform humans in various ways.

Speaker 2:

AI and computer programming, and the evolution of how things work in that space, have been fairly consistent. I think one of the major changes is the large language model aspect of ChatGPT. Let's just use ChatGPT as the basis of this conversation: a chatbot built on an LLM, a large language model, knows a great deal, it can be conversational, it can process and give you responses in many different ways, and it can be trained.

Speaker 2:

So with all of these things coming together, we haven't really had an experience where a consumer like you and I had the power of the things I was just talking about, the chess wizards or the AIs that were in Metal Gear Solid or NBA Jam. Maybe the AI there was kind of whack, but it was still a lot of fun. My point is we didn't have access to it. Finally the general public does, whether you're tech savvy and using OpenAI's backend, actually using tokens to build applications, or you're someone simple like me who just craves to learn the best way to create a prompt for ChatGPT. The way these things are being integrated, it's going to be wild, my dude. The way we will be operating and interacting with technology five or ten years from now, compared with how we do it today, and I don't think people are ready for that, especially on the consumer side of things.

Speaker 1:

Sadly, as much as I would love to be an optimist, I think you're right. Most people are just not ready for it. I'm somewhere in the middle. I've used ChatGPT just to give a political analysis. I caught it in a contradiction and told it, you're going to have to clarify this contradiction, because it doesn't make sense; how could it be A and B at the same time? And then it finally broke it down, so I was satisfied with it. I had just a casual conversation about electoral processes and all of that.

Speaker 1:

For the record, I'm going to be attacking the MAGA crowd for a second. The 2020 election was legitimate. Flawed, but legitimate, okay? It had a few procedural issues, but that doesn't make it illegitimate. Let's be clear about that. I mean, politically I hate both sides of the aisle, if I'm going to be frank, but I had to really attack Trump on that because he was just super irresponsible. And if he loses in 2024, I don't want to see another crybaby Trump, round two. I'll just leave it at that. I don't want to make it too political, but I had to say this.

Speaker 2:

I mean, your podcast is Politically High Tech, so it makes sense that you're digging into it.

Speaker 2:

I guess one thing, to bring it back to the political side, is that I'm curious to know how AI will help or hurt the process of counting ballots, the electoral process. There is going to be some sort of way that AI will be integrated into the system.

Speaker 2:

It probably already has to some degree, on a scale that we are unaware of, and it will continue to be a part of our everyday life, whether it comes to electing a local official, electing a president, cooking an item in the microwave, understanding how your car functions, or us connecting on your podcast. Every aspect of our lives, the decisions we make that impact millions and millions of people, and the decisions that truly impact just a handful, like a family at home: it is our responsibility to be more aware of what's happening, because the more AI gets infused in all these aspects of our lives, the less control we might have unless we're paying attention. I understand you were making a political comment there, but at the same time, it's directly connected to how we need to be processing the future of AI and how much our actions matter.

Speaker 1:

I mean, thank you for that. You make that connection for me. I'm lazy, so when you just intuitively do it for me, it's awesome. I don't care, I'm shameless. All right, you can call me out on it.

Speaker 1:

Listeners, go right ahead. There's a comment section there for you. Feel free to express your frustration, your venom, right there. I'm probably not going to read it. All right, there you go. Okay, let's bring it back to high tech. With AI, I already see a couple of negative implications, like the deepfake attacks. So there's already a negative impact of AI. And, of course, the robocall thing is ridiculous. I mean, that was even before.

Speaker 1:

Yeah, but that was an AI tool that they were using.

Speaker 2:

Are you talking about the executive who got a call? Well, I'll just tell the story. I'm not sure exactly what company it was, and I can look it up real quick, but just imagine a Fortune 500 company. A big CEO gets a message on their phone that says, hey, we're working on this deal right now, I need you to send me over the code for your 2FA. And he's like, what? And the message says, pick up your phone. So he answers his phone.

Speaker 2:

Somebody used AI to capture the voice of his colleague. He doesn't know how they did it, but his colleague's voice was mimicked; an AI was trained on a model of that voice. It was like in the old sci-fi movies where somebody uses a voice-coder machine and says, I'm going to be such and such. That just happened in real life. Now, luckily, the gentleman who was the CEO asked a few questions that the other person could not answer, which is what you're supposed to do.

Speaker 2:

But that level of corporate espionage is pretty significant, and that's why it's so important for us to understand just how AI can be used. Getting a call from somebody who sounds like the person you think it is, but it's coming from a weird number? Now we have to double-check that. Now you have to say, okay, confirm your identity, not once, not even twice, but three times, before I'm giving you any identifiable information. A really big thing at this point: if anybody ever contacts you, even if it seems like the most real situation, and asks you for any personally identifiable information, your job is pretty much to say no unless you meet the person in person.

Speaker 2:

It's getting really hard to distinguish who you can trust and who you cannot, especially in the digital spectrum, and this one company could have lost millions, maybe billions of dollars from just one conversation. All it took was him asking, what was the book I was talking about three days ago on the balcony of that one building, and what was the name of that building? The call ended, the caller hung up, and the guy immediately contacted everybody, like, be on the watch, these folks are out there. So this is not to scare you into a place of, oh my God, the world's going to end and nobody's going to have their privacy. I mean, that's probably true, but my point is that it is a tool that bad folks can use just as much as good folks, and it's our responsibility, like your podcast does, to communicate to our friends and family: this is how AI is being used, be careful, and also use this tool effectively for yourself.
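
Jason's defense in that story, asking questions only the real colleague could answer, is essentially a shared-secret challenge. Here is a toy Python sketch of the idea; the questions and answers are made-up placeholders, not details from the actual incident.

```python
# Toy sketch of the "ask something only the real person would know" check Jason describes.
# The questions and answers are made-up placeholders; real shared secrets should be agreed
# on in person, never over the same channel you are trying to verify.
KNOWN_ANSWERS = {
    "What book did we discuss on the balcony last week?": "the lean startup",
    "Which building were we in at the time?": "the riverside office",
}

def caller_passes_challenge(ask_caller) -> bool:
    """Return True only if the caller answers every pre-agreed question correctly."""
    for question, expected in KNOWN_ANSWERS.items():
        answer = ask_caller(question).strip().lower()
        if answer != expected:
            return False  # one miss and the caller is treated as unverified
    return True

if __name__ == "__main__":
    # Simulate an impostor who can mimic a voice but not the shared history.
    impostor = lambda question: "uh, I don't remember"
    if not caller_passes_challenge(impostor):
        print("Caller failed verification: share no codes or personal information.")
```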

Speaker 1:

You know, I like that, because I personally have family members who have been scammed by that exact type of fraudulent AI voice, one that sounds like a relative they know. I'm not going to give out a bunch of details, but he's part of the Gen Z crowd. I thought he would have done a better job deciphering it, but apparently not. I don't know. I love you, Gen Z. Maybe. Question mark, question mark.

Speaker 2:

I actually like you, let me stop.

Speaker 1:

But yeah, he fell for it because it sounded like his relative asking him for something. He just vouched for it and gave out that information. AI tech, when it comes to fraudulent scams, is getting more sophisticated, so you have to triple verify, quadruple verify, to Jason's point. And he's not doing this to scare you or, you know, manifest a Skynet or I, Robot situation. But we do have to be prepared, we have to be alert. We can't just live in la-la land. We just cannot live in la-la land.

Speaker 2:

You cannot live in la-la land. I completely agree. One small interjection, real quick: I was just reading an article about I, Robot and how it was ahead of its time. Imagine if I, Robot dropped around the same time this wave of AI dropped. It would have been the biggest movie. But at that time everyone was like, yeah, robots, I get it, AI, I get it. Now it's like, oh no, it's getting a lot closer to that, especially with Elon building robots and wanting to put them into our homes, just like a lot of our sci-fi films predicted. It's interesting how the real world matches and catches up to our science fiction movies and books and comic books and stories. We're getting closer, man, we're getting closer, and I'm both excited and a little scared at the same time.

Speaker 1:

Of course. And you're going to say, oh, Jason's contradicting himself, he's scared and excited at the same time. No, we're human beings, okay? As much as I love things to be simple, I consider myself a cautious optimist.

Speaker 1:

I'm looking forward to AI's potential, but I'm also aware of its risks. I mean, you've got to see multiple sides of the subject or the issue. You can't just say, oh, AI is all great, it's going to do all great things. No. Just like with human beings, there's a spectrum around it from very good to very bad, and it often lands right in the middle, so AI is going to reflect its creator. Okay? If the creator has bad intentions, that's what it's going to do.

Speaker 1:

I was going to give a very social example. If you want to make the AI racist, I think it would be stupid, but go ahead; I want that person to be called out for wasting their time and everybody else's time. Someone creates a racist AI that says, oh, I can't recognize this person, it's too dark; oh, this person is light, I can recognize all the features. And sadly there have been studies showing this, which needs to be worked on, because I think human biases have been transferred. Can we correct that through AI, or do we have to correct it ourselves before we even get to that conversation? I don't know, what's your take on that?

Speaker 2:

So there are a couple of things here. When you're talking about a large language model like ChatGPT, you're feeding in a bunch of information. There are video AI apps now that create videos based on word prompts. What are those videos based off of? If you're talking about a large language model and AI, you have to train it on something. So from a video point of view, you train it on, say, YouTube. Well, what are the demographics of YouTube? I don't know, but let's just say that the majority of folks who publish on YouTube are male; let's say 70%, just hypothetically. Then there would be a percentage bias toward male-created content once your video AI has learned what it needs to learn, a certain inherent bias. That doesn't mean that every individual inside of that is intentionally being malicious or biased. It's just that when you look at all of the data, the millions and millions of videos out there, the small subtleties that exist in this large aggregate of data are going to slightly shift a decision one way or another. So that's with video, that's with the internet.
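
Jason's 70 percent hypothetical is easy to see in a toy simulation: whatever skew is in the training data shows up in whatever is sampled from it. The numbers below are invented purely for illustration and describe no real platform or dataset.

```python
# Toy illustration of the hypothetical above: if ~70% of a training corpus comes from one
# group, samples drawn from that corpus inherit roughly the same skew. All numbers are
# invented for illustration; this models no real platform or dataset.
import random
from collections import Counter

random.seed(42)

# A pretend corpus in which each document is tagged with its creator's group.
corpus = ["group_a"] * 7000 + ["group_b"] * 3000  # 70% / 30% split

# "Training" here is just sampling documents, the way a model only ever sees its data.
training_sample = random.sample(corpus, 1000)
counts = Counter(training_sample)

for group, n in sorted(counts.items()):
    print(f"{group}: {n / len(training_sample):.1%} of what the model learns from")
# The proportions hover near 70% / 30% -- the skew in the data becomes the skew in the model's diet.
```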

Speaker 2:

I was talking to somebody about Wikipedia, because this gentleman is working on something really fascinating. He's trying to figure out, if we were to send a message into space out of all of humanity's thoughts and words, what would that look like? He's part of interstellarorg. It's fascinating. And he said, well, Wikipedia is a great place where most people start, right? So let's say you just had a large language model AI based upon Wikipedia. You'd think that would be a pretty good idea; it has a lot of information in it. Apparently the number is staggering: something like 90% of the articles on Wikipedia are written by men, and largely white men. Not that that's a problem inherently, but if you train some sort of model on that information and you ask it a question about diversity, anything at all related to gender or race, the answer it comes out with could be biased. And we are still trying to figure out the best ways to make sure that the data we put in, even if it might be swaying one way, gets leveled out to even the playing field. So there is a problem with the data that we use.

Speaker 2:

The thing is, if we want to trust AI, and I want to. I've written papers with it already, as an adult. Students use it all the time; we've all begun to use it all the time. In a couple of years, when you talk to Alexa, when you talk to Google, when you talk to all these things, they're all going to be using these systems.

Speaker 2:

And the more that we trust these systems that have this bias, the more we might be confirming it ourselves and causing even more of it. In the US, at the national level, we're not doing enough to understand how it impacts us, and only certain states are really going out of their way to dive into it. California, at least in the past year, has put in more legislation related to AI than I think any other state in the United States, which is great for the people. But high-tech companies get annoyed by that, because they're like, I want to do whatever I want. Well, the point of having all this is to make sure that you can't do whatever you want, so that we're protecting the people who are using it. And so this is going to be a tug of war back and forth. It's going to be tough.

Speaker 1:

As it always has been, whenever there's a tug of war between government and major companies over regulation. I mean, go back in history to U.S. Steel; that was a big tug of war. There's precedent for all this stuff, this is not new, people. And speaking of antitrust, as I'm talking, there is a victory that I'm happy about.

Speaker 1:

The judge said, and I'd have to check which court, but the judge basically said, Google, you did something illegal, you have monopolistic power, and we need to rule on that. So I said, good, this is a step in the right direction, because my problem with the court system is that it's been biased, not along Democrat and Republican lines, but by being too pro-corporate. Corporations have gained so many wins, and that was my concern with the courts. It's all about, I was gonna say creatively, green cheddar. Figure that out: green cheddar, green cheddar. There you go, it's green. That's for us, green cheddar.

Speaker 2:

Oh, you said green cheddar? I thought you said greed cheddar. I was like, that also works. You know what, that works. Yeah, let's go with that: greed cheddar. See, greed cheddar.

Speaker 1:

You see, this is why he's a great public speaker. He's wordsmithing in real time. People, this is evidence right here. Clip it, flip it: greed cheddar.

Speaker 2:

We gotta coin that word. When Kendrick Lamar drops a song called Greed Cheddar in about three weeks, y'all know the reason why: it's because it started off on Politically High Tech. That's where it started off. And hey, give me my credit.

Speaker 1:

All right, I'll give some credit to Jason Brown, okay. I guess I don't get all the credit. If I was doing a solo episode, then yeah, I would fight for every single credit, and maybe then some.

Speaker 2:

Trademark it. Make sure I get t-shirts, all the things. Greed cheddar, there you go.

Speaker 1:

I'm anti-corporate in the sense that I generally side with the left on that issue, because I think they're more on the money. Not that I'm against corporations existing; I'm not that radical. The far left has an issue with that. But the rational left, which is a lot of them, and just like on the right, there's a lot of rational ones too, even though, you know, the media likes to pick on the boogeymen because they're more entertaining, and that's what generates attention and clout and eyeballs, not the rational people. Rational people are boring. Who cares? Let's skip to, oh, what did Trump say, let's pay attention to that; oh, what did AOC say, let's pay attention to that. Forget the normal adults, they're boring, they put a person to sleep. We're not watching National Geographic or the Discovery Channel. We want some entertainment, we want something to get people pissed off or to react.

Speaker 2:

To that point, though, real quick: I completely agree with you, that's the way the news usually works. But could you imagine if AI were trained just on the way the news tells us information, and if AI thought that was the right way to do it? Then it would have a structure within these large language models that would lessen the importance of quality information and reinforce the sensationalizing of the news. Even something as simple as that, thinking about how AI could be incorporated there, is a little scary, because it would get even more efficient; we would get even better at doing that thing that we already do.

Speaker 2:

That thing you and I both don't like. I want to be able to turn on the news and be like, okay, don't just focus on the murder, don't just focus on the hot topics when it comes down to politics. Give me the real stuff. What about the educational system as it relates to Pennsylvania? What about New York? What about California? What about Florida? What about Michigan? No, that's not even going to be discussed once over the course of a week on these news channels, because it's not sensational enough. So if there is ever an AI news channel, I hope that it's done correctly, I hope that it's not just like the rest of them and it actually gives us the information.

Speaker 1:

Oh yeah, well, I'm sure there's going to be a sensationalized version. Let's just be real.

Speaker 1:

That's fine, just ignore that one, people, stop the negativity. All right, I was just about to swear, but I'm trying to keep this as clean as possible.

Speaker 1:

Just go to the AI news that gives us brief, digestible information, something we can consume in maybe two to five minutes. Well, I can give you links to outlets that focus on facts, not sensationalism, like The New Paper and the Associated Press. I'm going to put the links in the description. Trust me, I got stuff. Pay attention to more of those, okay? And I've got other news sites as well; I'm going to link them down below, so if you've got time, go follow them. I'm going to put various links in the description.

Speaker 2:

One of the things you just reminded me of, because you kept saying, I got this citation, I got this information from the Associated Press. Here's the thing, though. When it comes to ChatGPT, I've begun to ask it: where did you get this information from? What is the link where I can find more information? And I feel like more people should be doing that.

Speaker 2:

If you're looking up any of the topics you talk about on your podcast, if you're using AI tools, especially chat tools, and it pumps out information, ask it: could you please provide me the links and resources where you acquired this information? Sometimes it can't do it, and when it can't do it and can't confirm it, then I don't assume it's true. I want to see the website. I want to read the article from the Associated Press, because if you pulled it from Billy Bob's basement authority information site, I'm going to be like, no, I don't want Billy Bob's information. I would like a credible source for the information you're feeding me, and that's still a problem when it comes to AI. So yeah, ask ChatGPT where it got the information from.
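
Jason's habit of asking the chatbot where its claims come from can be wrapped into a small routine. This is a hypothetical helper, not part of any chatbot's API; pair it with whichever client or chat window you already use.

```python
# Sketch of the "where did you get this?" habit: append a citation request to every prompt
# and flag answers that contain no checkable links. The helper names here are hypothetical.
import re

def with_citation_request(question: str) -> str:
    """Append an explicit request for sources to any prompt."""
    return (
        f"{question}\n\n"
        "Please list the specific sources (with links) this answer is based on. "
        "If you cannot point to a source, say so explicitly."
    )

def looks_unsourced(answer: str) -> bool:
    """Crude check: flag answers that contain no URLs at all."""
    return re.search(r"https?://\S+", answer) is None

if __name__ == "__main__":
    print(with_citation_request("What changed in mail-in ballot procedures in 2020?"))
    # After your chat tool replies:
    reply = "Several states expanded mail-in voting."
    if looks_unsourced(reply):
        print("No links found: verify against AP or another primary source before repeating it.")
```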

Speaker 1:

Yeah, just like you want to question humans about where they get their information from, we've got to apply the same thing to AI, because AI is not perfect at all, and I'm going to keep saying that, and I don't think it ever will be. It's going to be great, but not perfect; those are two different things. And just like you don't blindly trust human beings... let me make up something really wacky, something so bizarre that you'll know I'm spreading misinformation right away: Kamala Harris is a white man.

Speaker 2:

Everybody, that was not a factual statement. Please do not quote this at any moment. This has been a public service announcement. But you're right: what is the source for that? And I think it's going to become even harder unless it's integrated into the systems. Honestly, ChatGPT does do it sometimes; if you ask it an academic-style question, it will give you links to things that exist, but it doesn't do it every time. I think we need to train ourselves as human beings to ask for that more often, because what you just said obviously is not true.

Speaker 1:

Obviously it's not true, but if you want to believe that and make me feel good by laughing at you, then go right ahead. You're just feeding my ego, my dark side, by the way. But listen, my responsibility, my self-imposed responsibility to be clear, is to try to combat misinformation and disinformation. If I've done it myself, I'll correct it the next episode: it's possible that something I said in a previous episode was incorrect, my apologies. That's why I include corrections in the description too, to make sure they're acknowledged and recognized. I'm going to make mistakes too, so by all means check what I'm saying, cite what I'm saying. I want that, and I want to have a conversation. Maybe it's something I missed because I didn't have time to break it down, or I misread the situation. That is a possibility, I'll be honest. I'm putting the mature cap on.

Speaker 1:

Now, you know, I love to have fun and come up with craziness; that's kind of my thing. But just feel free, don't take everything I say too hard. I expect you to be smart, especially my listeners and viewers. I want you to be smart.

Speaker 1:

Okay, even question me, correct me. If a person gets too emotional about being corrected, whatever gender they identify as, they are immature; that's all I'm going to say about that, and it reveals a lot about the person. I personally know people who, if you correct them, take it as an attack. They can say, okay, let's talk about that later, we'll push it aside, but I don't mind being corrected right on the spot. I have been corrected right on the spot, and I said, oh okay, yeah, that's actually true, and I fixed it right there, because I care about spreading the truth more than feeding my own ego. If it were about feeding my ego, I'd say all kinds of crazy stuff and defend it to the death, and that's really not my style. So I'm just going to be very transparent with that, but I don't want to take too much time on that conversation. That's just why I said it: because of misinformation and disinformation, truth and integrity are far more important than my own ego.

Speaker 2:

So one more thing I want to say about this before we hop into something else. When it comes to using chatbots, and this is one of those things where, again, I completely agree with you: correcting individuals, or maybe the better way of saying it is having a conversation with an individual and opening up a dialogue, is the best way to engage, understand their perspective, and make sure the facts are correct. It's not necessarily a correction of "you're right" or "you're wrong"; you can say, hey, I have a reason to believe you might be incorrect. A conversation, not finger-pointing or character assassination. But bringing it back to AI and chatbots and ChatGPT, when it comes to correcting, I don't think we do a good enough job. One, we talked about citations, which is important; you've got the data. But if something is not correct, I like to tell ChatGPT, here are the reasons why you're not right. I think you said this earlier: you told it, you have two conflicting statements in this particular scenario, and I think that is flawed.

Speaker 2:

Now, depending on whether you're paying for ChatGPT or not, it's not supposed to be capturing and saving your information up to their servers, the cloud. I'm sure some of it is anyway, but they're not supposed to if you're paying for it, from my understanding. If I'm wrong about this, please let me know; I would love to know if, when you pay for ChatGPT, they're still collecting all of your data. I don't think that's the case, or you can opt out.

Speaker 2:

Still, I correct ChatGPT, because I'm hoping that my little comment, my little suggestion, my little shift of perspective will help make sure the integrity of what people are using stays intact. Will I have any impact? Look, I have no idea, but I certainly hope so. And I think that training ourselves to do that, whether it's with AI, a human being, online, in person, with your family members or with your friends, that approach of politely and respectfully disagreeing, even if it's a machine, is the best way to make sure we're on the same page. So yeah, dude, I completely get where you're coming from.

Speaker 1:

All righty then, what's the next topic here? I think we've touched a little bit on it: DEI. I won't be surprised if we disagree on something; I'm expecting to, which is fine. And a reminder for my listeners and viewers: disagreement does not equal hate or an attack, okay? I don't know why I feel like I have to keep reminding you of that, but based on how tribalized things are, I don't want you far radical left or right morons coming at me. But if you do, I will not censor you. You want to know why? Because I want to document your stupidity and hideousness for the internet. Yeah, I'm protecting free speech, but not for the reasons you think; I want to make sure the public square knows how hideous you are, how doofy you are. Okay.

Speaker 2:

Appreciate that, I appreciate that. But yeah, so, about DEI.

Speaker 1:

So, DEI. I've been very critical of it, not because of the idea; the idea itself is noble, it's a good goal. The problem is the execution and even some of the implementations. I've been part of one of those boards and I got kicked out of it because I'm too vocal, and I kind of don't care. I kind of foresaw me being kicked out, because I'm not afraid to challenge someone.

Speaker 1:

If you've seen the very old movie, listeners, especially the older ones, 12 Angry Men: I was that one juror who disagreed, questioned everything, and gave a hard time to the other eleven who wanted to go home and get it done and over with. But sadly the opposite happened. Unlike him convincing all of them, I got thrown out. I got kicked off the group because, I think, I was being too vocal. And I was proud I was doing it, because we always kept talking about fluffy stuff like holidays and religions and all that. That's great and all, but we've got to talk about the root causes of certain biases on the job, and they didn't want to touch that. So I said, what's the point of this group? And that's why I got kicked out. In short.

Speaker 2:

So, a couple things. One, I'm sorry that happened to you. That sucks, that's stupid, and having somebody retaliate against you for voicing your opinion never feels good. So I hope you do find a spot where people can appreciate your vocal ability. The next point: I love the movie 12 Angry Men, by the way. Look at the original, the black-and-white joint. Go back and watch the original. I don't watch a lot of black-and-white movies, but that movie I will watch any time, any day. It's so well done.

Speaker 2:

Now, okay. The problems that you articulated don't necessarily scream to me a problem with DEI, up until the point where you said they don't want to go past being superficial. The concern I have is this. Let me back up: there are a lot of different elements to DEI; it's basically diversity, equity and inclusion. So let's just assume the group was all white, or predominantly not people like you, and they wanted to bring some other folks in to gain additional perspective, and you were brought into that fold. That is a very noble gesture for them to make, to begin to do pieces of this. But the problem is, if that's all you do, if you just add somebody who's different, somebody who's international, onto the group, and that is the extent of the work you're trying to accomplish, you're going to inherently create conflict.

Speaker 2:

You have to allow people to have conversation; it goes back to this dialogue aspect. It goes back to going further than just acknowledging the fact that, yes, this week is Ramadan, okay, that's your diversity moment. No, it has to be a deeper understanding, like: during this week we're going to be shifting our hours, and we're going to make sure we respect this, and here's a little bit more about the religion you may not understand as it relates to this particular holiday. And not only that, we're not going to put all the ownership on the individuals here who practice it; we're actually going to have some folks come in and coach all of us on what this might look like. We all know what Christmas is, right? Yay, okay.

Speaker 2:

Well, here's another aspect that you might need to pay attention to. It's this intentionality of creating conversation and dialogue, and not just relying on folks to say, I've added a blank person to the group. It takes a deeper understanding than that. And to your point, to your issue, when you were in this group and they weren't comfortable with your forward approach: one, that could just be the identity of that group. Not every person is going to fit with every group, and that could be the case. The other piece could be the fact that they are not familiar with people from different backgrounds being able to vocalize their opinions in different ways.

Speaker 2:

And that does not mean that you accept disrespect or a minimization of people's integrity. No. You don't necessarily have to break your value structure to do it, but you have to at least take a step back and say, if we've invited people who speak differently into the room, well, maybe the rules by which we speak need to change to accommodate that. But if you don't even think about that as part of the conversation, and you think things should remain the same, but we just get to have a little bit of Politically High Tech inside the room because that makes us feel better from a DEI perspective, that's when it gets flawed. DEI, diversity, equity, inclusion, traditionally has been superficial, but the underlying value system and structure that's there is incredibly important. It's just been executed in the wrong ways in many places, and because of that it has left a bad taste in a lot of people's mouths. I personally hope to rebrand all of that, to find a new way to have those conversations and restructure all of those pieces, because that's the only way we can get back to saying, you know what?

Speaker 2:

Let's get Elias in here. No, no, no, I understand what happened last time; let's just make sure we get him in here and we have a great conversation. Or, I don't like the way this is going, the format of the meeting needs to change, so let's try it, let's just try it, and let's have a conversation afterwards. How did you feel about that, Elias? How did you feel about the rest of the crew? How do we need to adjust? What came out of this? If you're not willing to do that, then that's the problem, and I hope to change that. So that's my perspective.

Speaker 1:

And I think that's a good time to wrap that up. I could add so much to it, but you already nailed a lot of the points, so I'll leave it at that. Because I was not just forward with my mouth, I was forward in pushing tasks and all that, and I respected a lot of people's ideas. When an idea was good, I said, that's good, let's do it. But they just paid attention to the couple of times I was very vocal. I even challenged the majority, because you can't always be afraid of the majority.

Speaker 1:

I know that's easy to say, but as you know, especially if you study America, the tyranny of the majority is a thing. I don't care if you're black, white, multicolored, whatever the heck you are, it is a thing. But you've got to push, especially when you have a good idea that's in your soul, in your gut. And that's a little bit of the spiritual part of it.

Speaker 2:

If I could, just 60 seconds. I talked for a long time about it, and I was commenting about you, and I don't want to be disrespectful to your position and how you feel. I want to hear your response: how you truly felt, and whether I connected with some of the things you felt, because I didn't acknowledge that and you didn't necessarily say whether I connected. I want to see if we aligned, or if I was off, and how that felt for you.

Speaker 1:

Just like 60 seconds? Oh yeah. I mean, in terms of DEI, I think we have a very similar understanding. It's the execution that's the problem, the superficiality of the execution. I think we're actually spot on; I thought we would be. But that's the main thing, along with the worry about someone being too vocal.

Speaker 1:

And you know, you have to deal with it, you have to be comfortable being uncomfortable with some debates, lively debates and conflict. Not all conflict is bad. I mean, I used to be programmed to think all conflict was bad; I used to be overly afraid of it. But deep down inside, I actually like conflict. It can actually be a good thing. It's actually exciting.

Speaker 1:

Yes, not the gunshots and that kind of crap, not the fiery war conflict; I want to prevent that kind. The verbal conflict, though, is interesting; you can have that all day. Keep it verbal, keep it verbal. Like I said, I need to model that behavior as well. Disagreement doesn't equal hate. It's just two ideas in the air, either mixing or fighting each other. Well, it depends on the conversation, right?

Speaker 2:

So I wish I had time to go deeper into that.

Speaker 1:

But people have got to be ready for that. We've got to make it a natural practice in our jobs; I think that's the only way it's going to happen, and it's going to take time. A lot of corporations just did it for the money, because they thought it was a cool thing, which is why all the criticism came as well.

Speaker 1:

That's why they axed it as well. I made a comment and people thought I was crazy, but I said, no, give it two years, they're going to axe them. And a lot of them are gone. They're not doing it because they care about your well-being; they do it because they want to look cool.

Speaker 1:

It's all virtue signaling. And I have to be honest, I kind of agree with the Republicans on some of their criticism of DEI. Not the part where they want to bundle it all together and throw it out; that's where I disagree with them. No, I just think it needs to be rebranded, remodeled and all of that.

Speaker 2:

Rebranded, yes. It needs a new word. It needs a new approach. It needs some sort of consistency in the way it's actually implemented, as opposed to finding people who just claim to be experts in the space and then give it a bad name and ruin the experience for other people. I'm not trying to attack anyone, but if you feel as though I just attacked you, then maybe you need to question how you have implemented DEI in your own place. If you've done your homework and you're able to implement it correctly and do it the right way, well, then you probably don't feel attacked by what I just said. It's never an attack, it's a conversation. So holler at me if you want to chat about that more.

Speaker 1:

Yep, sweet, let's do that. Shameless plug time, we have shameless plug time. Go to thejasonbrowne.com, that is T-H-E-J-A-S-O-N-B-R-O-W-N-E dot com.

Speaker 2:

Thejasonbrowne.com. The one thing I want to say real quick is that if you don't want to go to the website, that's fine; go to YouTube and look up Jason Brown TEDx. It's the Possibilities of Privilege talk, and it relates to DEI. I would love it if you just watched the video, left one comment and shared it with a friend. If you want to do one thing that would continue the conversation, check out that TEDx, Jason Brown, Possibilities of Privilege. We'd love to see your comments.

Speaker 1:

Yeah, I watched it. It was actually great, because I always thought privilege could be both what you earn and, to some extent, what you're born with. I've always thought of it that way.

Speaker 1:

And even before that, and I'm kind of bragging about it because I'm kind of ahead of everybody on this one, I said, no, who said privilege is always a bad thing? I even said, look, I have Latino in my DNA, so yeah, it is a Latino privilege to take the heat, to look good with a tan and all that. Oh yeah, I call that the Caribbean privilege. I want to be extra inclusive there. There you go.

Speaker 2:

There it is.

Speaker 1:

Take that, Europeans. All right, let me stop. Actually, some of you can take the sun as well; I'm going to credit you Italian people. So blame your DNA, not me, okay? Life is unfair sometimes. Deal with it. You can't get everything. You can't get everything.

Speaker 2:

I appreciate you having me, man. This has been fun, it's been a wild, crazy ride, and I appreciate it.

Speaker 1:

Yeah, that's what I plan to do. So, like I said, go to thejasonbrowne.com. For some reason I get tongue-twisted when I spell it out real fast; I'm not that professional at that part yet. Expose it, clip it, flip it if you want, make memes and trolls, that's fine; enjoy a good laugh if you can do it well. And go to his social media as well for more conversation.

Speaker 1:

Check his TED Talk. Just check it, especially those of you who think privileged people are a bunch of Karens. I know some of you fools think like that. If you're smart, don't admit it; just watch the video, be educated, and pretend like you always knew privilege could be a good thing. Just put up that front, even though I might expose you. That's all I'm going to say about that. Just check it out.

Speaker 1:

He's actually a great speaker, he's the real deal. A lot of speakers I get bored with quickly. Not this one, not this one. I am a difficult person, proudly; I fall asleep on a lot of speakers, sometimes right in their face, because they just sound so dry. How the heck can they become a speaker and sound this dull? But hey, he's revolutionizing the game. I think I have a couple of people on my lineup who could handle a TEDx, you know, a speaking engagement experience, and I can say that with a straight face. So, with that said, from wherever you're listening to this podcast, have a blessed day, afternoon or night. Thank you.
