There will certainly be a lot of job disruption, because what's going to happen is robots will be able to do everything better than us. Me included. I mean all of us. Something like 12% of jobs are in transport, and transport will be one of the first things to go fully autonomous. But when I say everything, I mean the robot will be able to do everything.
I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, you know? Because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive.
Because I think by the time we are reactive in AI regulation, it's too late. AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that. If you're talking to a digital superintelligence and can't tell whether that is a computer or a human, say you're just having a conversation over a phone or
a video conference or something, where it looks like a person, makes all the right inflections and movements and all the small subtleties that constitute a human, talks like a human, makes mistakes like a human, and you literally just can't tell: are you conversing with a person or with an AI?
It might as well be human.

So on a darker topic, you've expressed serious concern about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces, but since we're kind of the optimistic descendants of apes, perhaps we can find several paths of escaping the harm of AI.
So if I can give you three, maybe you can comment on which you think is the most promising. One is scaling up efforts on AI safety and beneficial-AI research, in the hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as quickly as possible. And three is merging with AI and riding the wave of that increasing intelligence as it continuously improves.
What do you think is most promising, most interesting, for us as a civilization to invest in?

I think there's a tremendous amount of investment going on in AI; where there's a lack of investment is in AI safety. And there should be, in my view, a government agency that oversees anything related to AI, to confirm that it does not represent a public-safety risk.
Just as there is a regulatory authority for other things: there's the Food and Drug Administration, there's NHTSA for automotive safety, there's the FAA for aircraft safety. We generally come to the conclusion that it is important to have a government referee, a referee that serves the public interest, ensuring that things are safe when there's a potential danger to the public.
I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public do.

Right. But let me tell you, the problem with this is that the government moves very slowly.
Usually the way a regulatory agency comes into being is that something terrible happens, there's a huge public outcry, and years after that a regulatory agency or rule is put in place. Take something like seat belts. It was known for, I don't know, a decade or more that seat belts have a massive impact on safety and would prevent so many deaths and serious injuries.
And the car industry fought the requirement to put seat belts in tooth and nail. That's crazy. Yeah, and I don't know, hundreds of thousands of people probably died because of that. And they said people wouldn't buy cars if they had seat belts, which is obviously absurd. Or look at the tobacco industry and how long they fought anything about smoking.
That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when you have these companies effectively achieve regulatory capture of government. That's bad. People in the AI community refer to the advent of digital superintelligence as a singularity.
That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point, and that there's some probability it will be bad, some probability it will be good. We want to affect that probability and have it be more good than bad. Right now, the data we have regarding
how the brain works is very limited. We've got fMRI, which is kind of like putting a stethoscope on the outside of a factory wall, and then moving it all over the factory wall: you can sort of hear the sounds, but you don't really know what the machines are doing.
It's hard. You can infer a few things, but it's a very broad brushstroke. In order to really know what's going on in the brain, you have to have high-precision sensors, and then you want stimulus and response: if you trigger a neuron, how do you feel? What do you see?
How does it change your perception of the world?

I actually think the machine side is far more malleable than the biological side, by a huge amount. So it will be the machine that adapts to the brain. That's the only thing that's possible; the brain can't adapt that well to the machine. You can have neurons start to regard an electrode as just another neuron,
because to a neuron there's just a pulse, and something else is pulsing. So there is that elasticity in the interface, which we believe is something that can happen. But the vast majority of the malleability will have to be on the machine side. There will be some adjustment to the brain, because there's going to be something reading and stimulating the brain, and so it will adjust to that thing.
But the vast majority of the adjustment will be on the machine side. It just has to be that way; otherwise it will not work. Ultimately, we currently operate on two layers. We have sort of a limbic, primitive-brain layer, which is where all of our impulses are coming from.
It's sort of like we've got a monkey brain with a computer stuck on it. That's the human brain. A lot of our impulses and everything are driven by the monkey brain, and the computer, the cortex, is constantly trying to make the monkey brain happy. It's not the cortex
steering the monkey brain; it's the monkey brain steering the cortex. The cortex is what we call human intelligence. It's the advanced computer relative to other creatures. Other creatures don't really have either: they don't have the computer, or they have a very weak computer relative to humans.
It sort of seems like surely the really smart thing should control the dumb thing, but actually the dumb thing controls the smart thing. I mean, we're a neural net, and AI is basically a neural net. So a digital neural net will interface with the biological neural net and hopefully bring us along for the ride, you know?
But the vast majority of our intelligence will be digital. Think of the difference in intelligence between your cortex and your limbic system: it's gigantic. Your limbic system really has no comprehension of what the hell the cortex is doing. It's just literally hungry, you know, or tired, or angry, or something, and then it communicates that impulse to the cortex and tells the cortex to go satisfy it.
People generally don't want to lose the cortex either, right? They like having both the cortex and the limbic system. Yeah. And then there's a tertiary layer, which will be digital superintelligence, and I think there's room for optimism here: given that the cortex is very intelligent and the limbic system is not, and yet they work together well, perhaps there can be a tertiary layer where digital superintelligence lies, one that is vastly more intelligent than the cortex but still coexists peacefully, and in a benign manner, with the cortex and the limbic system.