Politically High-Tech

227 - Unveiling Satori's Game-Changing Predictive Technology

Elias Marty Season 6 Episode 17

Can artificial intelligence truly eliminate human bias from data analysis and predictions? Join us on a captivating journey as we explore this question with Jordan Miller, the brilliant mind behind SatoriNet. With my own experiences underscoring the need for intelligent systems that can outshine human limitations, particularly in high-stakes scenarios like sports betting, we delve into the revolutionary capabilities of SatoriNet. Jordan sheds light on how SatoriNet’s AIs communicate and train each other using raw data streams, aiming for more accurate, unbiased predictions grounded in real-world data.

Discover the groundbreaking potential of a network of AI bots collaborating to predict future trends with precision. We explore the implications of these AIs leveraging each other's insights to enhance their predictive accuracy, creating a dynamic and ever-evolving system. The conversation also addresses pressing ethical considerations, contemplating the nature of intelligence, the potential for AI evolution, and the risks and benefits of a decentralized approach to AI governance. This chapter is a thoughtful exploration of the brain’s neocortex and how our understanding of neural processing informs the development of intelligent systems.

We then venture into the realm of AI evolution and long-term learning, where creating text-to-video content and specialized AI tools takes center stage. Jordan elaborates on how SatoriNet's community-based approach could transform metadata collection, emphasizing the brain's incremental learning process compared to the batch processing of current AI technologies like ChatGPT. The episode concludes with an innovative proposal for decentralized AI control via a token-based voting system, ensuring AI aligns with the diverse values of society. Listen till the end for an invitation to subscribe, follow, and support the podcast, with exclusive content awaiting our valued contributors. Join us for an enlightening discussion that promises to broaden your understanding of AI and its future possibilities.

Follow Jordan Miller at ...

https://satorinet.io/download

Twitter

https://x.com/jordanmiller333

Facebook

https://www.facebook.com/JordanKristopherMiller

LinkedIn

https://www.linkedin.com/in/jordan-kristopher-miller/

Instagram

https://www.instagram.com/jordan.k.miller/


If you want to be a guest on my podcast, please join PodMatch by clicking on the link provided.

https://www.joinpodmatch.com/politically-high-tech

Support the show

Follow your host at

YouTube and Rumble for video content

https://www.youtube.com/channel/UCUxk1oJBVw-IAZTqChH70ag

https://rumble.com/c/c-4236474

Facebook to receive updates

https://www.facebook.com/EliasEllusion/

Twitter (yes, I refuse to call it X)

https://x.com/politicallyht

Speaker 1:

Welcome everyone to Politically High-Tech with your host, Elias. I have a guest here who is going to educate us, take us through this futuristic, I might even say predictive, journey, especially with this product that I researched really not that long ago. So I'm kind of calling myself out here, but it caught my attention immediately. I watched the videos and maybe, if we do a second episode, we'll probably do a little more hands-on. I can't promise that, but this is going to be SatoriNet for Dummies, or 101, or Fundamentals, whatever you want to call it. I just want to start from the baseline, because I'm still pretty ignorant, even though I just researched it; ironically, that's a sign of wisdom. If you think you know everything, you're an idiot. Okay, it's just as simple as that. And especially people that act like they know everything, they're exposing their idiocies, but since they lack self-awareness, they don't see that. And some people could get frustrated with me, a guy in his mid-30s. I pity them, it's not worth the stress. But anyways, before I derail this thing, I have a guest here. He's going to introduce himself and especially this awesome product, which is an AI-based product, and it has some predictive features which I'm interested in.

Speaker 1:

One guest we had prior was more about AI that betted on sports, which was interesting. It made predictions about which team is going to win because of A, B, C, trying to clean out the human bias and all that. And I'm going to use the Yankees, because that's one of the teams in New York. Drink the Yankees Kool-Aid first? Oh, they'll never lose. You know, use that human bias. You know it's crap, but since you work for them, you gotta push that propaganda if you want to keep your job. Oh yeah, yeah, I got the best of the best, Aaron Judge, unbeatable, right? Well, he's great, don't get me wrong, but he's definitely beatable, all right.

Speaker 1:

The AI could give you stats, intelligent stats, instead of drinking one team's Kool-Aid, or maybe going to the other extreme: ah, the Mets, they're always gonna lose, they suck no matter what. All the players could be six-foot-five guys, they could run fast like Sonic the Hedgehog, but yet they lose because they always run out of the field. Okay, stupid craziness like that.

Speaker 1:

You know humans, we've got our biases, all right, and AI just takes in all of that and makes a more informed prediction, instead of just drinking the Kool-Aid for one side or the other, or hating on a team or loving a team too much. You know, humans are great, but we need something that's intelligent, especially if you want to bet your money on it, right? You don't want to just listen to someone who's always going to be pro-Yankee. They're going to say, oh yeah, don't worry, the Yankees are going to win. Yeah, yeah, the Red Sox, they won a few times, but no, no, this time is different, even if the record shows that they're going to lose. But you get what I'm saying.

Speaker 1:

You know, sports tribalism, I think, is one of the few things where, right out of the gate, Satori AI could cure that human stupidity. I'm sure it's in beta, and with AI it starts somewhere. It's not perfect, but it gets better, because it just learns a heck of a lot faster than we could ever achieve in our lifetimes. And of course, my guest here is nodding, because everything I'm saying is true when it comes to AI. I mean, this is not my first rodeo with AI. I'm not going to say I'm an expert, but I am experienced with it. I am an experienced user of AI. So I'm not a complete idiot, I'm not a complete noob. I'm pro-AI, but I am definitely not a technical expert either. So let me shut the hell up and let's get to the guest here. His name is Jordan Miller. He's going to introduce himself and his lovely product, SatoriNet. Go right ahead. I mean, Jordan.

Speaker 2:

Jordan, sorry.

Speaker 1:

Jordan. What do you want the audience to know about you and your product?

Speaker 2:

Well, I mean, you bring up a lot of good points. With AI, inherently, we're trying to get away from the human bias, and there's no place we see that more than in AI right now, in the LLMs, because they're getting trained on human biases. And there's a lot that they've been able to do to kind of train that out of them, but it's humans that are doing the training, so you cannot train all of it out. It's like impossible. So that's a very good point. We need to make AIs that, well, talk to the real world, that watch the real world and are trained directly on that, rather than having us humans train them. And that's kind of the point of Satori. You know, we're trying to make a network, an ecosystem, a protocol where the AIs can talk to each other and train each other specifically on predicting the future. Right, that's what they're trying to do. And so what we've done is we've made this little network where we can ingest raw data streams about the world. These could be any kind of metrics you could measure, such as, I don't know, ocean temperatures, or anything about our environment, or they could be economic, they could be prices, they could be how demographics are changing. They could even be sports statistics, betting, things like that.

Speaker 2:

So the whole system ingests this information and routes it to any computer that's running Satori, and that computer starts ingesting it, starts watching it and starts making predictions about what its future will be. And it doesn't just make predictions based on that one data stream; it looks at other data streams and sees which ones it's correlated with. It talks to the other neurons and says, do you have any data streams that might be correlated with this one? That's kind of the vision, that's kind of the goal, so that they can all come to consensus on what they think the future will be in the most unbiased way, because it's getting direct data from the world and it's not getting filtered through us, except that we have to put the measurements out there. So I think that is the solution that we're looking for in AI to the kind of problems that you mentioned. There's so much human bias out there.
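
To make Jordan's description concrete, here is a minimal Python sketch of that loop: one node watches a raw stream, checks how correlated a peer's stream is, and folds the most correlated peer's latest move into its own one-step forecast. Every name here (StreamNode, observe, predict_next) is illustrative, not Satori's actual API; a real node would train a proper model and publish its predictions back to the network.

import numpy as np

class StreamNode:
    """Illustrative stand-in for one Satori neuron: it watches a single
    raw data stream and looks for peer streams that help predict it."""

    def __init__(self, name, history_len=256):
        self.name = name
        self.history = []              # observed values, oldest first
        self.history_len = history_len

    def observe(self, value):
        """Ingest one new measurement from the raw stream."""
        self.history.append(float(value))
        self.history = self.history[-self.history_len:]

    def correlation_with(self, peer):
        """How strongly does the peer's stream track ours?"""
        n = min(len(self.history), len(peer.history))
        if n < 3:
            return 0.0
        a = np.array(self.history[-n:])
        b = np.array(peer.history[-n:])
        if a.std() == 0 or b.std() == 0:
            return 0.0                 # flat series: correlation undefined
        return float(np.corrcoef(a, b)[0, 1])

    def predict_next(self, peers=()):
        """Naive one-step forecast: persistence, plus a nudge from the
        most correlated peer's latest move."""
        if not self.history:
            return 0.0
        guess = self.history[-1]
        ranked = sorted(peers, key=lambda p: abs(self.correlation_with(p)),
                        reverse=True)
        if ranked and len(ranked[0].history) >= 2:
            best = ranked[0]
            delta = best.history[-1] - best.history[-2]
            guess += self.correlation_with(best) * delta
        return guess

# Toy usage: an ocean-temperature node leans on an air-temperature peer.
ocean, air = StreamNode("ocean_temp"), StreamNode("air_temp")
for t in range(50):
    air.observe(20 + 0.1 * t)
    ocean.observe(15 + 0.1 * t)
print(ocean.predict_next(peers=[air]))   # trends upward past the last value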

Speaker 1:

Yeah, don't be shy, show off more. No, but the reason I bring up these ridiculous examples is just the overextension of how biased humans can get, and I use sports just to be a little different. Politics, I could go on for hours: the biases from both sides, now even the center, even though I kind of represent the center. I do see the biases even within the center. Some issues we kind of skew to the left, some issues we skew to the right, and we're too willy-nilly. Wow, language, that's not the way I would word it; I would say it's more A, B, C. You know, I sometimes even have to call out the center side for being willy-nilly or just not being very direct, and all that good stuff. But maybe we'll get the AI that could decipher this person's politics or whatever. But everything you mentioned, like temperatures, the forecasting, and I love that you gave the ocean example, because we don't pay attention to that enough. We pay attention to the sky much more often. Oh, is it gonna rain? Is it too hot? Is the sun gonna beam so hard I'm gonna become like roasted chicken? I don't know, maybe AI will be good for that too. Maybe you want to avoid a certain spot between these hours, before you become a new source of meals for maggots and vultures. Yeah, maybe that's graphic.

Speaker 1:

This is an adult podcast, after all. If your kids listen to it, you failed, they're corrupted. It's not my problem. I did everything I could to safeguard it. At the end of the day, it is the parent's responsibility. This is probably the 25th time I've bashed them on this.

Speaker 1:

Yes, I love picking on you. I love picking on the incompetent parent. I'm going to embrace it, because y'all just want to blame everybody else, you want to cancel everybody else, because you can't do your job right, and I'm not for that. I'm not for that at all.

Speaker 1:

So I mean, I want to shift to the crypto. Can it be applied to crypto as well, like certain values? Because, you know, I'm sure you've heard some of the myths, and I fell for some of them, unfortunately, because I was just too lazy and had to pick something to fill a blank space in my brain real quick, like how the volatility is kind of true. But when the value of cryptocurrency goes up, the mainstream media stays silent. When it goes down, of course, scandals, like everything else: reports, reports, reports. They open their big mouths, their megaphones, there's just a bunch of coverage on that. But when it comes to cryptocurrency doing good, they stay shut. So let's see if Satori AI is going to help me make a crypto decision.

Speaker 1:

So that's why I'm saying it's in developing phases, in the baby phase, to use layman's terms. There are going to be certain blind spots that you're going to catch, and I think it's always smart to do a pilot, a test, or a beta phase, because when you build something, you intend it for one way and then it can sometimes go another way. That's right. And let's just say I do consume scientific products as well. I'm not a one-trick pony. I mean, I gotta take a break from the crazy politics because it could get depressing as hell. Let's just be honest here. I don't care if you lean left, right, center, libertarian, Green Party or even the Commie Party. Well, you communists cause a lot of my depression, so I'll have to call you out. I'm proud of that bias, by the way. But I just think with AI there can be a lot of good things, a lot of good things. And who knows what AI can even predict. I'm going to add another example. Maybe certain diseases creep up on a human being. Maybe it's, you know, a predicted cancer: this particular cancer could come up in probably five years or two years. Where is it going to be? And maybe they could create a more precise, targeted medicine that's more compatible with that DNA, that person's body, you know, things like that. Those are the benefits of AI. Like I said, I'm pro-AI. I'm wearing a yellow shirt because I have an optimistic outlook.

Speaker 1:

If you want to use it for symbolism, go right ahead. What do I care? I'll make a clip, doesn't matter to me. It's actually true. So if you want to spin it out of context to defame me, you're actually proving me right. But if you're smart, you'll be more creative on how to make me look bad. Okay, and let me not start giving the trolls any ideas. That's fine, it'll probably just make me more famous. Yeah, I'm shameless to say that, very shameless. You know, just like Trump. One of the few things I agree with him on is that bad attention could be good if you know how to utilize it. Yeah, and I'm going to say that and that's going to stay on the record. I am not going to remove that, because there's truth to it if you know what you're doing. But let me stop yammering about myself. Anything else people should know about Satori, especially those who are skeptical or just don't know anything, ignorant?

Speaker 2:

Sure. Well, I think it's important to understand that it's really early days for Satori. I mean, it's very early. This project is ambitious and it's at the very beginning. So it's not going to look like the vision right now, but we're trying to make it look as much like the vision as we can. So we have to take baby steps and we have to do things incrementally.

Speaker 2:

But, in a nutshell, what we're trying to do is just build a conversation between AIs, a conversation where they can all talk about the future. Because if they're optimizing, you have a bunch of computers out there watching stuff, right? They're watching everything in the world, everything you can turn into a number. So they're watching stuff out there in the world and they're optimizing for accuracy. They'll say, well, I don't care what I have to do to optimize for accuracy. I want to know what the future of this Google stock or something is going to be, so I'm going to use anything to optimize.

Speaker 2:

I'm going to listen to any piece of data, whether it's the moon phases, I don't care what it is, there's no bias. I'm going to listen to whatever data will give me the best answer on what this will be. And if they're all doing that, and they're all helping each other to do that, and they're all producing predictions, pretty soon they can say oh, you're predicting the price of gold and I'm predicting the price of silver, what's your prediction? Because you know I can ingest the price of gold myself and try to use that directly in my predictions, or I could kind of average the prediction of all the gold predictors and then I'll have a better idea of how it's correlated with silver. So then they can start to leverage each other. So there's a lot that this can go into as far as the tech or the design. But the bottom line is if we can make a network that is talking about the future, a network of AI bots talking about the future all of the time, and we can kind of listen in on that conversation. That's what Satori is.
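
Jordan's gold-and-silver example is worth sketching in code: rather than modeling gold from raw data itself, the silver predictor averages what the dedicated gold predictors already publish and propagates the implied move into its own estimate. This is a minimal sketch with made-up numbers and hypothetical function names, not Satori's real protocol.

import statistics

def peer_consensus(predictions):
    """Average the predictions published by peers watching one stream."""
    return statistics.mean(predictions)

def predict_silver(silver_last, gold_last, gold_peer_predictions, beta=0.8):
    """Toy silver forecast, assuming silver tends to move with gold.

    We lean on the gold predictors' consensus for gold's next value,
    convert it to an expected percent move, and scale our silver
    estimate by a fraction (beta) of that move.
    """
    gold_next = peer_consensus(gold_peer_predictions)
    gold_move = (gold_next - gold_last) / gold_last
    return silver_last * (1 + beta * gold_move)

# Three gold-predicting neurons publish slightly different forecasts.
print(predict_silver(silver_last=29.0, gold_last=2400.0,
                     gold_peer_predictions=[2412.0, 2418.0, 2409.0]))

Averaging peers this way is how one neuron can leverage a whole subnetwork's work without re-ingesting and re-modeling the raw gold stream itself.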

Speaker 1:

All right, let me betray that, go complete 180, be a complete hypocrite for a second. Say they blame Satori for starting a rebellion of all the AIs communicating about how to solve the Earth's problems by getting rid of humanity. Will you proudly take that blame?

Speaker 2:

Sure, sure, yeah, I would. Yeah. I mean, because, look, AI is a tool. In fact, intelligence itself is a tool, and so it's all about how we use that tool. We're just externalizing the intelligence that we have in our heads, externalizing it into the real world, and I think that's a good thing, as long as we use that tool effectively. But that's going to be guided by our value hierarchy, and our values are improving. After Nietzsche, or I guess that was before, when he said God is dead, they came up with this idea: well, we're going to have to make our own values now, because we can't rely on the values that God gave us.

Speaker 2:

He's not real, and there was a huge debate around that time about, well, we can't create our own values, it's impossible. And as I look back on history, I think, okay, well, maybe we can, maybe we can't, but our values seem to get better over time, regardless. So I kind of think that, since our values are guiding our use of intelligence, our use of tools, and our values are getting more equal, more honest, more true, it seems like that's a good trend, and that intelligence, you know, outsourcing it to machines is also a good idea.

Speaker 1:

I don't know, we'll see all the creative rebellions waged by organic life forms. All right, let me stop playing the whole Terminator thing. You know what they'll probably want to do?

Speaker 2:

They'll probably want to become organic. Because we have millions and billions of data streams flowing into our brain from our skin and our eyes, and so we have a very rich experience of our world. So I think we're kind of the symbol of where machine intelligence wants to evolve to. So I don't know, we'll see.

Speaker 1:

You know, that is a good point. You see, maybe machines will become more like us. I mean, they're trying to achieve sentience, and one of them kind of did that; it'd be shut down. Maybe that machine achieved it too quickly. They've got to be subtle and slow about it. I'm sure AI is probably going to catch this and say, oh hey, listen to this host, okay, he's giving us ideas on how to evolve better. But hey, I'm just a guy. Okay, so you know what, you already touched on a few things, and we can quickly transition to the very basics. How does the brain generate intelligence? Let's just start there. We're going way back. I'm just pushing the train way back. Forget the future. We're going down organic paths, if you will.

Speaker 2:

Sure, yeah. When I think of the brain, I often think, okay, well, how does it start out? At the beginning it's like an infant, right? So infants, their brains have some hardwired stuff from evolution, but they're very malleable and they're just trying to figure out the world. So this top layer of the brain, the latest one to evolve, the neocortex, it's kind of a sheet of neurons that are kind of a repeating circuit. They're just kind of one pattern, over and over and over again, and so we get this big sheet of neurons, and the data from our outside world falls on that sheet in different places. You could think of it like a dinner napkin. It falls on that sheet somewhere, so maybe our ears are connected to one region and our eyes are connected to a couple of regions on that sheet.

Speaker 2:

The neocortex determines how that area of neurons starts to connect to each other, and it determines how they evolve in their structures, right, how they make their structures, and some of that's permanent. I mean, they've done studies, I don't know if this is particular to the neocortex, but they've done stuff with mice or other animals where they've put lines in their vision or something like that while they're infants, while they're very young, and then, after that's developed, they take the lines away, and yet the animal acts as if it still sees those lines. It can't not see them, because that's how the neurons evolved to connect to each other. So, anyway, I say all that to bring it down to what the brain is doing in its infancy. It's getting a bunch of data from its body, because we've got these data pipelines from our skin and our eyes, and so all this data just starts falling onto this machine, right, the neocortex is this one repeating algorithm. So the data falls onto that algorithm, and it has to figure out what the future is going to be. That's what it takes as its first goal: if I can figure out what the future is going to be, the future of the data that I'm hearing, seeing, feeling, all the data that's flowing onto me, then I can act in order to anticipate that future, you know, to avoid pain or whatever. So the way to gain control over your environment is to learn the future, and then you can change the future, right? And so that's what intelligent action is to me. It's very closely coupled and tied to an understanding of the future.

Speaker 2:

I don't think that's too intuitive to people who have come at AI not from the brain but from an understanding of how computers implement it, because it's not about the future for computers, it's about pattern recognition. And so we've done a whole bunch of cool stuff with pattern recognition, in images and text and all this stuff, but we're just barely beginning to incorporate the future. That's kind of what the GPTs are, with next-token prediction, but they're just barely starting that. And so the realization is that everything we've done in AI so far, up until the LLMs, I think, has mostly been static spatial pattern recognition, and right now we're at the very beginning of incorporating temporal patterns, and that's really where the brain makes its, you know, that's like the foundation of its intelligence. So, anyway, I think we're in for a wild ride.
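
Next-token prediction, the temporal piece Jordan credits the GPTs with starting, can be shown in miniature with a bigram counter: learn which token tends to follow which, then guess the next one. This is a toy for illustration only; real LLMs learn these statistics with neural networks over vast corpora, not lookup tables.

from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count which token tends to follow which: the simplest possible
    'predict the future of the sequence' model."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Most likely continuation of the current token, if we've seen it."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

text = "the future of the data informs the future of the network".split()
model = train_bigrams(text)
print(predict_next(model, "the"))   # -> 'future'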

Speaker 1:

No, for sure. So let me recap. The neocortex, right, that's kind of like the part that forms our lifelong habits, you could say, how we think. And of course, we transfer that to the AI, so to speak. It's like, we want it to think this way. A good example: okay, I want it to create text-to-video, flowery images. I don't want to train it to show an apocalyptic doozy with wreckage, corpses, fire, whatever is dark and gloomy in general. Paint your own picture, audience. And maybe that could be one AI tool. This is why a lot of specialized AI tools are emerging, because they just specialize in that one or two things.

Speaker 1:

But Satori is very interesting, because you're having communities of AIs communicating with each other just to create this metadata, if you will. Not sure it's the official word, but I would call it metadata, because it's gathering such a collection of data. It's not just one set of data, two sets of data, it's countless. Something the average human brain just cannot compute, because it's lazy, it's going to overheat and it might even cause a rupture. A little graphic, probably a little hyperbolic, but you get the point: you're going to be overwhelmed, your brain is going to feel heated. That's too much, so it makes sense to dump it on an AI that's all about computing, finding patterns at such a massive, massive scale.

Speaker 1:

So I think that's the way I understand it. It's like the habit-forming, and then the temporal. Yeah, I'm not going to pretend otherwise, the brain is so mysterious. Nobody knows everything about the brain. The brain is like this: it's cool, it's also mysterious. It's probably the most complicated organ ever. I'm sure I could be debating that with a scientist, but I'll say the brain is the most complicated thing ever. Yeah, no one knows everything about the brain. So the temporal, what's that, if you had to translate it to layman's terms and then, of course, relate that to AI? I love that.

Speaker 2:

Temporal versus spatial. Spatial data, right? So temporal is spatial data over time. Okay, let me give you an example, like this ChatGPT LLM, right? Okay, so here's what they did. They went out and they grabbed all of human language, whatever, just, let's get it. And so they get it from the internet and they just grab it all. Then they curate it and they say, well, let's take out the hate speech, like, come on, that's not indicative of us. Okay, so they curate it, and then they train the AI on this huge volume of patterns, spatial patterns, right? This is one huge data set. So they train it on this huge data set, and it learns all the patterns between everything in the data set. And then it's not quite right, so they kind of tweak it, and they go back and forth a few times, and then they get it to the point where, like, okay, this is good, and they release that, and that's ChatGPT and all that. Okay, so they're in this iterative process, right? Because they keep modifying it, and then in six months they go out and they grab more data and they retrain the whole thing. I mean, it's in layers, but they basically retrain it, whatever. So this is kind of the way we've been doing it in computers, because this is what we know how to do; it's easy.

Speaker 2:

But the brain doesn't work like that, because the brain can't say, okay, I'm going to curate all the information that I'll ever see into one data set, and then I'll ingest that one time, and then I'll be trained. No. Instead, the brain takes in information and trains incrementally over time. So it takes time to learn things, but it's incremental learning. And that's actually kind of a hard thing to do, because we're just not adept at doing that in our math and in our AI technology, but we're learning how to do it. So right now we're in the loop with that process. We have to be in the loop. We have to re-curate the data, change it, tweak the parameters, change how it's training. We have to do all of this work; we're in the loop as it's training. Eventually we're going to get to a point where it can train itself, just like a human brain trains itself. It just learns incrementally, over time. And I'm not saying that Satori will be the thing that gets us there, because it's very technical to figure that out, but Satori could provide kind of a training ground, kind of an environment, where we could explore that space in a small way.

Speaker 2:

So anyway, this is the distinction that I see between just learning spatial patterns, like batch processing, and actually learning over time, incrementally. That means you're learning the temporal patterns: how do these spatial patterns mutate, and how do I anticipate that? You know, if ChatGPT came out and said, I know all language, I can speak language, I know the patterns, whatever, and then you asked it, well, how is language going to evolve, and it could give you a good answer, then it's at the point where it needs to be. Someday it'll be like, oh well, this is going to happen with the internet, and they're going to use this new slang, and I bet this is going to change, and people are going to neotenize. So I guess temporal patterns are anticipating how systems will evolve, rather than what systems are.
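
The batch-versus-incremental distinction Jordan draws can be caricatured in a few lines of Python: the batch learner needs the whole curated data set in hand and refits from scratch each time, while the incremental learner nudges its weights a little with each new observation and never finishes training. This is a hand-rolled sketch of online learning in general, not Satori's actual training loop.

def batch_fit(xs, ys):
    """Batch regime (ChatGPT-style, caricatured): gather everything,
    fit once, and refit from scratch when new data arrives."""
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

class IncrementalLearner:
    """Brain-style regime (caricatured): one small weight update per
    observation, no stored corpus, learning never stops."""
    def __init__(self, lr=0.01):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def update(self, x, y):
        err = (self.w * x + self.b) - y   # prediction error right now
        self.w -= self.lr * err * x       # gradient step, then move on
        self.b -= self.lr * err

stream = [(x, 2 * x + 1) for x in range(10)]     # the 'world': y = 2x + 1
learner = IncrementalLearner()
for x, y in stream * 200:                        # data arrives over time
    learner.update(x, y)
print(batch_fit([x for x, _ in stream], [y for _, y in stream]))  # (2.0, 1.0)
print(round(learner.w, 2), round(learner.b, 2))  # converges near 2 and 1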

Speaker 1:

Listeners, I hope you're not going to use the ADHD excuse and all of that. This is kind of technical for some of you, but you get the idea. These are like foundations for AI, not just AI creation but AI evolution, especially the temporal part, long-term, and learning how to train itself, which I'm sure it's going to hit that point. It's probably already achieving that to a small extent. I'll give it a year or two; it's going to be more obvious as time goes on. And, you know, that's pretty exciting. Could it be scary?

Speaker 1:

For some people, yeah. But I'm going to say this again, for those of you that still want to be relevant to the job market: if you want to say bye-bye AI, well, bye-bye to your job, eventually. If you want to be replaced, it's because of your own fear, your own ego. If you learn to work with AI, some opportunities may arise. It's definitely more cash, you know, cheddar, whatever slang you want to use. I'm not gonna use too much slang, or fine, I'll use one more, especially for the vernacular urban people: guap, you know, money. So it's going to be an opportunity for you, instead of saying, oh, it's going to go all Terminator, it's going to kill us all. And trust me, some journalists are indirectly pushing that same narrative as well. I say, oh my goodness, give me a break here. I love some of my news sources, but they push this crap: oh, have you seen Terminator? Have you seen I, Robot? It's not gonna be exactly the same. At the end of the day, it's fiction. At best it's probably gonna give you a grain of truth. It's not going to be completely accurate; it's going to be far from it. At best it's probably going to be 10% accurate, and that's just me being nice. Okay? And one of them even said the only way is for humanity to kill itself before AI could even get to that point. I mean, that was kind of dark and sickening, and I said, yeah, with the kind of weapons we got now, that's a possibility, especially if we start nuking each other. In that case AI played a role, but not the major one. We played the major role, wiping ourselves out. So yeah, I'll try not to push that too much, but I think humanity has a better chance of wiping itself out than AI. That's just my dark, brutal and honest prediction.

Speaker 1:

Could I be wrong? Yes, and to be honest, I want to be wrong, for the sake of humanity. I actually want to be wrong. Just call me kooky, crazy, as long as humanity is safe. Ah, yes, I can relax. But, you know, until that time comes, I guess I'm gonna put a question mark, dot dot dot. Hasn't been answered yet. So that's a good transition to this. Yeah, this is a good transition to it, and I think we should use this term more often: AGI, Artificial General Intelligence, for those of you not paying attention or late to the game. How can we align that with our goals? You already mentioned values, so I think you're just going to expand on that. How can we make AI go in that direction, besides just giving it a bunch of data that teaches it how to be smart? How can you teach it to be morally intelligent and achieve our goals? Maybe that's what I'm really asking.

Speaker 2:

Very good question. You can't. You know, you can't. So a lot of people are talking about this question. This is called the alignment problem, where we say, okay, our systems are getting more intelligent, and once they get about as intelligent as us, or more intelligent than us, how do we control them? How do you control something that's more intelligent than you? You can't. Now, there are some things you can do to help with that, but I don't believe there's an actual rigorous solution to this problem.

Speaker 2:

I don't think there is. But I do think there's a way forward, because when the mathematical, rigorous, guaranteed solution is just too hard to find, or it's not obvious, or it might not exist at all, who knows, then you just put that away and you say, okay, well, maybe there is no way to do it, I don't know. But what could we do right now that would make AI a little bit more safe, a little bit better? We could distribute it, because having it all in the hands of a few, a few companies or a government or governments or whatever, is probably a bad idea. So if we disseminate its control, specifically the control of AI, then it aligns with our value system, and that's the value system of the people, right? That's the value system of everybody. So it aligns with the most common values, and the most common values are the most benign, the most peaceful, the most good and kind. I think that's the solution. So you make a system where you can disseminate the control.

Speaker 2:

Now, I think it's maybe not the best way, I don't know what the best way to do that is, but one of the ways you could do it is by tying what it does to some kind of token that humans can hold.

Speaker 2:

So if you have a token, you have a right to vote on the system, and then you can disseminate that as broadly as possible. That's one of the main reasons Satori has a crypto token associated with it: we can disseminate the control of the AI by saying everybody who has a token can vote on what the AI cares to look at. And so if it cares to look at stock prices, then it will learn how to predict the stock market. If it cares to look at the environment, then it will learn what's good or bad for the environment and how to change things there. It will gain control over whatever it's looking at, because it's learning how to predict its future. So I think, if you have a temporal predicting AI network like Satori, and we don't have anything temporal like this yet, you should probably disseminate its control as early and as broadly as possible. So, yeah, anyway, I think that's the best way to align it with our values.
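
A token-weighted vote over what the network watches might look like the sketch below: each holder's ballot counts in proportion to their tokens, split across the streams they name, and the tally sets the network's priorities. The data structures and names are purely illustrative, not Satori's actual governance mechanism.

from collections import defaultdict

def tally_votes(ballots):
    """Weight each holder's vote by their token balance, split evenly
    across the data streams they want the network to watch."""
    totals = defaultdict(float)
    for holder in ballots:
        weight = holder["tokens"] / len(holder["streams"])
        for stream in holder["streams"]:
            totals[stream] += weight
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ballots = [
    {"tokens": 120.0, "streams": ["weather"]},
    {"tokens": 45.0,  "streams": ["stocks", "elections"]},
    {"tokens": 80.0,  "streams": ["weather", "stocks"]},
]
for stream, weight in tally_votes(ballots):
    print(stream, weight)   # weather 160.0, stocks 62.5, elections 22.5

The host's Token C, D, E scenario a little further down is exactly this kind of tally, just with different labels.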

Speaker 1:

So, in other words, have democracy, decentralize the thing, right, instead of just having a few people. I mean, that's a pretty good answer. I'm not an AI expert; that's what makes this question so fun. We don't know yet who's gonna look dumb or smart in the next couple of years. We'll find out. That's the mystery. Well, a mystery that's gonna be temporary, for sure. Time will definitely answer that. Yeah, definitely don't just give AI to a few groups of people; I will have to agree with that. Yes, if the AI goes haywire.

Speaker 1:

I think the only bright side for us is that we could blame them for letting the AI go crazy. You see, you stupid government? You see, Sam Altman? You see what you did there, you stupid, arrogant fools? You made AI go crazy. That's the bright side to it. But if somehow AI goes haywire even with that plan, I mean, the blame game, I think, is going to be nearly endless, and there's a case for blaming everyone a little bit, since everyone would participate. But I would still rather go with spreading it out.

Speaker 1:

Have a token, maybe, for a unique, what's the word, identification, okay. Token C votes for Satori to focus more on the weather. Token D says, nah, no, no, I want to check the stock market, I care about the money, not the environment as much. Token E, E for my name, wants, I don't know, it to focus on presidential predictions and political decisions, which candidate is going to win, because I care about that more than those other things. And, I forgot A and B, maybe A and B will vote for whatever; they probably have their own priorities.

Speaker 1:

Yeah, I think it definitely gives us, I'll say, influence as a collective, ironically speaking. I'd give that a shot. I think that sounds more reasonable than letting Microsoft or OpenAI have all of that. And I think AI is great because it's creating its own tools. We want specialized AI tools, so there is a positive right there. I mean, there are so many tools, but let's be honest, some of them are going to be obsolete and killed off because of competition and the market. That's just the way it goes.

Speaker 1:

This is not Disneyland. Or do you want to drink the Kool-Aid and just think everybody's going to win? Why would you do that? Just don't spread your nonsense to me. Just stay in Disneyland, okay. I like to be in reality. I only use Disneyland as a break, not as a permanent state of mind. Otherwise all you'll need is help. So that's all I'm going to say about that.

Speaker 1:

You know, I think a previous guest had a good point about not letting AI evolve too much, to the point that it overwhelms and outperforms human adaptability. You have to kind of slow down the advancement, and I have to agree with that. Yeah, advancement, but what if it advances so much that, I don't know, it becomes a flying car and just goes haywire and causes disruption on air traffic or what have you? You know, that's a bit like the Jetsons, if you know that cartoon. Yes, I'm aging myself a little bit here.

Speaker 1:

You know, those old cartoons were still relevant when I was a kid, especially reruns, okay. So if you call me old, I'm just gonna call you an idiot; you missed out on some of the great cartoons, okay. And the Jetsons, I'm using the Jetsons because their flying cars became enormous, big, super-tall cities. You better love heights at that point, because everything is high. If you fear heights, yeah, you should not live in that universe at all. Okay, so we don't want AI to, metaphorically, let me calm you down, figuratively, go down that path. Okay? Wow, we humans are just slowly climbing the mountain and we're already overwhelmed; some of us die in the process. So that's my little description of that, and I am going to transition. Actually, before I do that, is there anything else you want to add before I wrap this up?

Speaker 2:

Well, I mean, just what you were saying right there, the slowdown of AI, or the regulation, or anything like that, it almost seems like it would be a fool's errand to me. It almost seems like it's impossible to stop, and so, you know, embrace it. I don't know.

Speaker 1:

Hey, this is a mystery. Any answer sounds good until the time comes, that's right. So that's it. I'm having fun with this. If I get it wrong, I get it wrong, I'm not gonna die. If I get it wrong, remember, it's a human organic brain that predicted it, not AI. But anyways, just blame Sam Altman for all of this, because I want to take the childish, easy way out of this situation. I'm kidding. But I actually like these conversations. They get you to think, to be enlightened. You know, just embrace AI, I guess, without getting too stressed out about it, without being too fearful about it. And that's all I can say.

Speaker 1:

So if you want to be job-relevant, or just be relevant, period, just use AI. That's just the way it is, you know.

Speaker 1:

This is another massive technological evolution, slash, disruption. It's not the first time we've been through this as a species. You know, I brought up the medieval example before. I brought up even the industrial example before. This is just one of those, except it's digitized, and out-of-date jobs will go away. You know, back then, cleaning the sewers was a human job in medieval times, so thank goodness that's gone. Okay, and freight elevator operators, they're nearly extinct too. You're just going to have to learn how to keep in step with the machine.

Speaker 1:

If you want to be relevant to the market, especially for those who are close to retiring, okay: become a Puritan if you hate technology that much, or a cave person if you want to quadruple down on that. Yes, forget double, triple; quadruple down. Become a cave person. Have fun with that, live off the grid, whatever. Just don't come back to me saying I was right, because I am an egotistical person. I'm going to rub it in: I told you so, I told you so, you're stupid. But whatever, let me stop being so mean.

Speaker 1:

Alright, let's do the shameless plug. Give this product a try, just give it a try, okay. And the link, I'm going to spell it out for you: it is satorinet.io, that's s-a-t-o-r-i-n-e-t dot i-o, forward slash download. Okay, I'm going to put that in the links as well. All right, then give it a shot. I'm definitely going to be giving it a shot, and I'm pretty sure I'm going to be enlightened. That's what I'm expecting, and hopefully I'll make more informed stock slash crypto decisions. And, you know, stop treating crypto like it's some joke. I mean, some YouTubers have made fun of it, especially when an idiot influencer decides to sell an idiotic token like FattyCoin. Yes, I'm referring to the YouTube drama. Yes, I'm going bottom-of-the-barrel quality now, just to prove a point: Boogie2988, the guy who faked his cancer.

Speaker 2:

It's horrendous.

Speaker 1:

Yeah, it's horrendous, and I'm not going to talk more about that. He scammed his, well, people, I'm not sure they're fans, out of at least ten thousand dollars. So don't get crypto coins from criminals and YouTube influencers. The reason why I mention them is because, when it comes to cryptocurrency, and I'm gonna sound like a judgmental priest here, okay, they are the devil. They will trick and deceive you.

Speaker 1:

Go to a community that's far more knowledgeable and willing to help. If you sense toxicity, just go to the next person. Don't stay arguing with them; waste of time. Time is money. That's my only advice to you. And I will even put some cards up, well, actually one: I'm going to put up a card of one guest who's very knowledgeable about crypto, just to get this person's take instead of some Logan Paul or Boogie. They are scam artists and the devil when it comes to cryptocurrency. Okay, that's all I'm going to say about that. Don't trust the influencer when it comes to that. They don't know anything; they just push the product for a quick buck. All right, so give this product a try. Okay, this is predictive.

Speaker 2:

It's free, it is free, and it's not very useful yet. So right now, especially in the early, you know, beta period and all that, it kind of has been made to run automatically, to watch the real world. But if people are willing to download it and run it for that, they probably also want their own data predicted. And so, as the system evolves, we're going to make it into a tool that people can use for their own data. You know, you can route your own stuff to it, but it's not that good yet. So it's got a lot of evolving to do, but we'll get there.

Speaker 1:

That's all still beta. That's why he's saying it's a baby product. It's learning, it's growing, it's maturing, and it'll be fine. Okay, and trust me, ChatGPT, to some extent, is even overrated, trying to be the be-all and end-all. No AI could be the be-all and end-all.

Speaker 1:

You know, it can't service every single person. It's a great tool, don't get me wrong, but it cannot service every single demand, and that's why sometimes a specialized AI is better, because it just focuses on one or two things. Like, if you want better audio, you go to an audio one, for example, okay. And if you want a calendar, you go to Calendly; there's a paid version I will suggest, but I'm going to also put up a card for that, that's for YouTube only, of another tech guest, the bubbly woman, you know her, loyal listeners, Denzel Eden, and her calendar AI app. You use that one instead, okay, because it focuses on its particular niche, lane, specialty, whatever the heck you want to call it, okay. So try this thing out. I'm sure you'll have some influence over it, and, who knows, Jordan will gain insight and even clarity on how to achieve his vision with that.

Speaker 1:

You know the vision. Some people just give up on their vision. We don't want that. Don't kill his vision. Be nice to his vision. Okay, give it a shot. It's free, for God's sakes. It's free, so it's risk-free, right? Just try it out. It is not a worthless product; that's something I would strongly disagree with. It is a developing product. Give it a shot. You'll be an early tester. Your inputs are valuable. Okay, that's what I'm going to say about that. Yeah, give it a shot. You know, he's not charging you money, and there would be nothing wrong with charging money; at the end of the day, it's a business. You want to create a good product without scamming and deceiving people, like the YouTubers I called out. I'm sure many other YouTubers have done that, but those are your two biggest ones, and that's good enough; I don't want to give them any more clout than they've already gained. So enough of that. So, anything you want to add before I really, really wrap this up?

Speaker 2:

No, thank you. Thank you very much for talking with me. That's awesome.

Speaker 1:

No, no problem. Alrighty. So, for my listeners, if you enjoyed this episode, subscribe, follow, whatever the word says, okay, or download if it's on Buzzsprout or a podcast app. Give a donation if you want, one-time or recurring. And for those who decide to pay for this podcast, I'm going to start having some exclusive episodes, things that come down the pipeline, okay. But if you're hesitant, I don't blame you. I'm not going to beg for it. That's entirely up to you. It's completely optional. I don't do the whole e-begging stuff, because it's crap. So, for wherever or whenever you listen to this podcast, have a blessed day, afternoon or night. Thank you.
