AI In Our Lives

Huntn
    CGI-AI:
Early CGI in movies: I absolutely hated its early implementation. I still believe that a real physical location has an unbeatable appeal, say James Bond in Siena, but CGI fills a niche that has no physical alternative, and it has progressed to the point where my brain starts to dismiss it as animation when people are seamlessly interwoven. Take Morag, a dead civilization, or Xandar (Guardians of the Galaxy): state-of-the-art CGI. I usually find myself asking, how much of this is CGI? Where does the practical end and the CGI start? :)





A parallel argument can be made about AI. But first, I'll repeat my concern that AI as a tool is powerful and scary on multiple levels. There is without doubt a huge threat to jobs:

The interesting thing is that job loss/export has been going on for 50 years, by the hand of the Corporatocracy, but now that basically most jobs might be threatened, the alarm is being raised. Yes, it is a threat and an opportunity, but it's just another iteration of technological progress, where what used to be represented by human skill is replaced by technology. Society will have to find a new equilibrium, and that will have to include finding a way of supporting the masses that make up our civilization. It's very possible it's time to consider the Socialist Utopia.
    That said…

I've not yet seen a character powered by AI, at least not one that I am aware of. If it is "soulless", I predict it will be just a matter of time before it finds its soul, as the programming is greatly expanded to incorporate personality and emotions. If it can be done, it will be.
     
    That is where you are wrong. You actually "teach" an AI by feeding it lots and lots of data in the area that you want it to work in.
    Yes. AI is initially programmed to teach itself and learn for itself. Most of the advanced AI models have also been programmed to seek and acquire data for themselves. My understanding, which I hope is true, is that those advanced models are confined to a sandbox and don't have unfettered access to the entire internet.

There is very little data left on the planet that is not stored digitally, and very little of that digital data is stored exclusively on systems that have no connection, direct or indirect, to the internet.
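To make the "teaching by feeding data" idea concrete, here is a minimal sketch of what training looks like in practice. It's a hypothetical toy in Python (a one-weight linear model, nothing like a production system), but the principle carries over: the programmer writes only a fixed update rule, and the behavior comes from the data.

```python
# A toy illustration of "teaching" a model by feeding it data.
# The underlying pattern here is y = 2x + 1; the model is never told this.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (input, target) pairs

w, b = 0.0, 0.0   # the model starts out knowing nothing
lr = 0.05         # learning rate: how big each correction step is

for epoch in range(2000):
    for x, y in data:
        pred = w * x + b      # the model's current guess
        err = pred - y        # how wrong the guess was
        w -= lr * err * x     # nudge the parameters to shrink the error
        b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1
```

Nothing in that loop encodes the answer; feed the same code different data and it "learns" a different rule.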
     
    That is where you are wrong. You actually "teach" an AI by feeding it lots and lots of data in the area that you want it to work in.

You can call it "teaching", I guess, but it is just data input/programming. I posted this before on the Mother Board... when IBM's Watson beat the Jeopardy champion, minds were blown. And indeed it was a great achievement. However, while that version of Watson was able to answer Jeopardy "questions", it couldn't answer questions in the form of questions. Likewise, a tech site tried to play Jeopardy with ChatGPT, and ChatGPT could not answer, because it didn't recognize the Jeopardy questions in the form of answers (that was some time ago; maybe ChatGPT has been updated with Jeopardy capabilities).

    AI cannot reason beyond whatever "reasoning" parameters you encode into it.
     
You can call it "teaching", I guess, but it is just data input/programming. I posted this before on the Mother Board... when IBM's Watson beat the Jeopardy champion, minds were blown. And indeed it was a great achievement. However, while that version of Watson was able to answer Jeopardy "questions", it couldn't answer questions in the form of questions. Likewise, a tech site tried to play Jeopardy with ChatGPT, and ChatGPT could not answer, because it didn't recognize the Jeopardy questions in the form of answers (that was some time ago; maybe ChatGPT has been updated with Jeopardy capabilities).

    AI cannot reason beyond whatever "reasoning" parameters you encode into it.

You need to read up on this topic. AI is to the point where it can teach itself. How do you think deepfakes work? They use a drawing algorithm and a discriminator algorithm. The drawing algorithm uses a large data set of the person you want to deepfake. The discriminator points out what is wrong with the image and sends it back to the drawing algorithm. Using a large data set, it teaches itself how to best match the target image.

This idea of "training" is the backbone of the current AI movement. It's why the longer you let an AI train, the better the results generally get.

    That's not simple input/programming.
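For anyone who wants to see the shape of that drawing/discriminator loop, here is a rough sketch in PyTorch. It's a toy under stated assumptions: one-dimensional numbers stand in for face images, and the network names are illustrative, not taken from any actual deepfake tool. The point is the feedback loop the post describes: the discriminator critiques, and the generator improves from that critique.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Drawing" network: turns random noise into a fake sample.
# Toy stand-in: 1-D numbers play the role of face images.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Discriminator: scores how real a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = generator(torch.randn(32, 4))    # the drawing network's attempts

    # Discriminator trains to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator trains on the discriminator's feedback: it improves by
    # making samples the critic can no longer flag as fake.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    print(f"fake mean: {generator(torch.randn(256, 4)).mean().item():.2f}")  # drifts toward 3.0
```

Whether the end result counts as "just programming" is exactly the disagreement in this thread; what the sketch shows is that no human writes down what the target data looks like.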
     
    And how do they know how to write an algorithm?
These articles provide a brief overview. To fully understand AI you would have to do a lot of reading that's a lot more technical. AI is not standard application programming. It's a completely different technique, more like creating virtual cyber brains than the standard if-then logic flow chart of specific commands.
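As a hedged illustration of that contrast, here is a toy comparison in Python: the same trivial behavior (XOR) written once as an explicit if-then rule, and once as a tiny neural network that learns it from four examples. This is a sketch of the principle only; real models differ in scale, not kind.

```python
import numpy as np

# Traditional programming: the rule (XOR) is spelled out explicitly.
def xor_if_then(a, b):
    if a == b:
        return 0
    return 1

# Neural-network approach: no XOR rule appears anywhere below, only
# weights that get adjusted until outputs match the training examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)          # forward pass: output
    # Backpropagation: push the prediction error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(0)

print(out.round().ravel())  # usually prints [0. 1. 1. 0.] -- learned, not hand-coded
```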

AIs are created by humans, just like we are all created by two humans.

AIs are initially guided by human input and feedback, just like we are trained by human input and feedback.

Eventually AIs start seeking out new data to learn things on their own initiative, just like we at some point seek out data to learn things.

Unless you subscribe to B.F. Skinner's view of humans, AIs aren't just the sum of their original programming.

The more advanced ones seek out data on their own to learn things. They rewrite their own programming and neural networks without human input or guidance, and without humans being able to predict what they are doing. Most developers of the more advanced AIs openly admit they don't know how those AIs are doing a lot of what they do, and that it goes beyond what they were initially programmed to do.

When we tell a toddler to walk to us, we are giving it a task to perform, but the toddler is the one that learns how to walk. AI is similar to that. The AI is asked to perform a task, but the AI learns how to perform that task. It learns in a variety of ways which include, but are not limited to, finding and analyzing new data, drawing on its past experience, and good old trial and error.
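A minimal sketch of that trial-and-error kind of learning, for the curious: the toy Python below is a classic multi-armed bandit learner (not a walking toddler or any real robot), but the structure is the same. The program is told only how rewarding each attempt was, never which action is correct, and its estimates improve purely by trying.

```python
import random

random.seed(0)
true_payoffs = [0.2, 0.5, 0.8]   # hidden from the learner, like gravity from a toddler
estimates = [0.0, 0.0, 0.0]      # the learner's current beliefs about each action
counts = [0, 0, 0]

for trial in range(5000):
    if random.random() < 0.1:                               # sometimes experiment...
        action = random.randrange(3)
    else:                                                   # ...otherwise do what has worked
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0  # stumble or step
    counts[action] += 1
    # Each trial nudges the estimate for the chosen action toward reality.
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])  # drifts toward [0.2, 0.5, 0.8]
```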
     
AIs are initially guided by human input and feedback, just like we are trained by human input and feedback.

Not guided, coded. Even when AI goes out to the cloud, finds data, and writes its own code, the AI is still acting on instructions coded into it by humans.

    Human input/feedback is very different from computer input/feedback. Should be obvious why.

In any case, if you guys want to think of AI as some sentient entity that's going to take over everything, feel free. I'll be here looking at what and how human coders code instructions into it, and how humans apply the technology.
     
I've been listening to podcasts on AI which are scary, but some of the posts on this thread are even scarier. If we need to regulate within 2 years, we're in trouble. Here we're spending a lot of time worrying about losing our democracy, but this may be the far bigger problem. Getting conservatives to agree to take action seems unlikely, especially if businesses start profiting more from replacing employees. As more people become unemployed, the unemployment rate is going to destroy whatever administration is in office. We have had a lot of job demand this year, but AI seems destined to make the COVID job losses seem small. I haven't checked some of the links from LA LA and Dragon, but I'm scared, and want to stick my head in the sand! How long before one of our forum members is an AI?

Today, the podcast discussed MetaGPT, which acts as a supervisory agent to reduce hallucinations from Large Language Models. They also cited a study of radiologists, in which the AI diagnosed x-rays twice as accurately as the radiologists. They also noted that radiologists who tried to supplement their diagnoses with AI didn't do any better, because they ignored the AI's assessments.
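The supervisory-agent idea is easy to sketch. The Python below is a hypothetical illustration of the pattern only: `draft_model` and `review_model` are stand-in stubs, not the actual MetaGPT API or real LLM calls. One model drafts, a second independently checks, and nothing unchecked reaches the user.

```python
def draft_model(question: str) -> str:
    # Stand-in for an LLM's draft answer; hard-coded here, and wrong on purpose.
    return "The Eiffel Tower is in Berlin."

def review_model(question: str, answer: str) -> bool:
    # Stand-in for a second model that fact-checks the draft (in a real
    # system, e.g. by cross-checking retrieved documents). Toy rule here.
    return "Berlin" not in answer

def answer_with_supervision(question: str, max_tries: int = 3) -> str:
    """Only return an answer the supervisor has approved; otherwise fail closed."""
    for _ in range(max_tries):
        draft = draft_model(question)
        if review_model(question, draft):
            return draft
    return "I'm not confident enough to answer that."

print(answer_with_supervision("Where is the Eiffel Tower?"))
# -> "I'm not confident enough to answer that." (the bad draft never gets through)
```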
     
If you think about people as individual "AIs" that are all trained on a different model, all of our experience is different.

What we think of as "self-aware" might just be what happens when an AI of a certain level of complexity comes into contact with another one that was trained totally separately.

    I think if every person was exactly alike, we wouldn't really have a sense of self.
     
I've been listening to podcasts on AI which are scary, but some of the posts on this thread are even scarier. If we need to regulate within 2 years, we're in trouble. Here we're spending a lot of time worrying about losing our democracy, but this may be the far bigger problem. Getting conservatives to agree to take action seems unlikely, especially if businesses start profiting more from replacing employees. As more people become unemployed, the unemployment rate is going to destroy whatever administration is in office. We have had a lot of job demand this year, but AI seems destined to make the COVID job losses seem small. I haven't checked some of the links from LA LA and Dragon, but I'm scared, and want to stick my head in the sand! How long before one of our forum members is an AI?

Today, the podcast discussed MetaGPT, which acts as a supervisory agent to reduce hallucinations from Large Language Models. They also cited a study of radiologists, in which the AI diagnosed x-rays twice as accurately as the radiologists. They also noted that radiologists who tried to supplement their diagnoses with AI didn't do any better, because they ignored the AI's assessments.

One of the reasons that AI is so effective when it comes to radiology is that it can compare thousands of images with patient data (diagnosis/future diagnoses/outcomes) and see patterns in a way that is impossible for humans. It could be a minimal shadow at a certain position, and the "experience" the AI gains from going over all that data could equal the experience of 100 experts with a lifetime of experience analysing x-rays.

BUT we are only talking x-rays here, for now at least. All the other telltale signs that a person may show, which a real human doctor may pick up, will not be available to an AI that is only trained on image analysis.
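To make the pattern-finding point concrete, here is a hedged toy sketch in Python: synthetic "scans" rather than real X-rays, with scikit-learn's LogisticRegression standing in for a real imaging model. The positive cases carry a faint intensity bump at a single pixel, far too subtle to eyeball in any one image, yet a model fit on thousands of examples picks it up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, pixels = 3000, 28 * 28            # 3000 synthetic 28x28 "scans", flattened

X = rng.normal(size=(n, pixels))     # pure-noise images
y = rng.integers(0, 2, size=n)       # labels: 1 = "disease", 0 = "healthy"
X[y == 1, 400] += 1.0                # the faint shadow: +1 sigma at one pixel

# Fit on 2500 scans; score on 500 the model has never seen.
model = LogisticRegression(max_iter=1000).fit(X[:2500], y[:2500])
print(f"held-out accuracy: {model.score(X[2500:], y[2500:]):.2f}")
# typically well above the 0.50 chance level
```

And as the post notes, that is all such a model sees: any telltale sign outside the pixels is invisible to it.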
     
Not guided, coded. Even when AI goes out to the cloud, finds data, and writes its own code, the AI is still acting on instructions coded into it by humans.
I think this is where the difficulty in understanding is coming from. AIs are not coded like traditional computer programs. They are cyber/virtually constructed neural networks that work on the same principles as the human brain.

    Our brains have "coding" built into our neural networks. Our entire body is a result of DNA "coding." Does that mean we don't really think or learn? Does that mean we can only do what our DNA "codes" us to do?

No, it doesn't, and the same is true for AI models.

    Human input/feedback is very different from computer input/feedback. Should be obvious why.
Sure they are, but that doesn't mean that what AIs learn and teach themselves is any more just a result of coding than what humans learn and teach themselves is just a result of our DNA coding. There's a synergy that takes place with AI that does not take place with traditional computer code, just like with humans.

If we cut a person's brain off from their sensory input and motor skills, they still think, even though they can't physically move or physically interact with the external world. AIs are cyber/virtual brains in a vat.

Skinner believed that humans were nothing more than the sum of their parts and knowledge. He ignored synergy in humans, just as the belief that AIs are only the sum of their "coding" ignores the demonstrable synergy taking place with AI.

In any case, if you guys want to think of AI as some sentient entity that's going to take over everything, feel free.
    I've made it very clear several times that we don't have confirmation that any of the AI models have achieved sentience/self-awareness.

Thinking, learning and self-teaching do not require sentience. The belief that learning and self-teaching require sentience/self-awareness is another incorrect one. Microorganisms learn and self-teach, but they are not sentient as far as we know.

We don't even know if newborn humans are sentient and self-aware at the time of birth. Since they aren't born understanding object permanence, there's a good chance we are all born without self-awareness/sentience, or with a very, very limited version of it. There are animals that show a lot more observable sentience/self-awareness than newborn human babies do.

Babies are born with an incredible aptitude to learn by observation. AIs are the same way. They are able to analyze all sorts and types of data (statistical, photographic, video, audio, text, and so on) in ways they were not taught/coded to. That is the epitome of true learning.

    I'll be here looking at what and how human coders code instructions into it, and how humans apply the technology.
    The people working with AI are seeing some AI models do things that they weren't designed to do. That's what some of the leading AI developers are raising alarms about. They are concerned to see some of their creations do things that they did not design/"code" them to do.

    So explain how that's possible, if AI can only do what it's "coded" to do by humans?
     
If you think about people as individual "AIs" that are all trained on a different model, all of our experience is different.

What we think of as "self-aware" might just be what happens when an AI of a certain level of complexity comes into contact with another one that was trained totally separately.

I think if every person was exactly alike, we wouldn't really have a sense of self.
The bolded is the basic gist of what Skinner and behaviorists have been arguing for a long time: that there is no true "self" or "me." In a panel debate, Skinner asked a person with opposing views a question. That person responded by asking, "Who's asking?"

    I don't believe we are just the sum of our DNA biological coding and our life experience. I think there's a synergy that takes place that makes us more than that.

    Skinner believed that we didn't have free will and that every thought we have, every emotional response we have, every decision we make and every action we take is predetermined by a combination of our DNA, internal biology and external input from the outside world. In short, he believed that we only do what we are "coded" to do and don't have the ability or choice to do anything else.
     
One of the reasons that AI is so effective when it comes to radiology is that it can compare thousands of images with patient data (diagnosis/future diagnoses/outcomes) and see patterns in a way that is impossible for humans. It could be a minimal shadow at a certain position, and the "experience" the AI gains from going over all that data could equal the experience of 100 experts with a lifetime of experience analysing x-rays.

BUT we are only talking x-rays here, for now at least. All the other telltale signs that a person may show, which a real human doctor may pick up, will not be available to an AI that is only trained on image analysis.
AI focused on a specific task is like the rare phenomenon of human savants. The brains of savants are constructed in ways that focus on specific tasks: music, math, physics, language, art, reading body language and so on. That's why savants have seemingly superhuman abilities with one thing, but really struggle with everything else. That's also why they are not "idiots." Their brains are just unusually optimized for one task.

This is another example of how similarly AI thinks and learns to the way humans do. An AI that is competent at everything will likely not be super competent at many things, because it can't be hyper-focused on everything.

At this time, the human brain processes much more data in a given unit of time than an AI does. AI is hyper-focused on processing data around only one task, which makes it a lot faster at performing that task than the typical human.

AI doesn't have to devote any processing bandwidth to keeping itself alive. Humans don't have that luxury, and our brains expend a ton of data-processing bandwidth every second just to keep us alive. Then you add all of the processing bandwidth we use to constantly monitor all of our sensory input. Then you add the bandwidth used up by all of the things we constantly think, remember and feel at an unconscious/subconscious level. Then we finally have all the bandwidth we use to process what we are actively thinking about, remembering and feeling. It's a lot of data-processing bandwidth.

    The technology doesn't exist right now to process as much data as fast as the human brain does. Not even the NSA supercomputers can do it. Well, at least not that they've told us about.
     
That will require a lot of mental changes, especially in the US, where "your work" is your identity and Universal Basic Pay is a negative topic. I think that Covid may help the transformation: people have discovered that there are more important things than spending 60 hours at the office every week.

This transformation will also face serious challenges from the environment. AI solutions require a lot of energy and a lot of hardware in our homes/offices. How to combine those needs with the major ecological transformation and challenges we face simultaneously could prove to be extremely dangerous for the majority of the people on Earth.

If people are not needed to do the work, then I fear darker forces may find other solutions for how to distribute the diminishing resources on Earth, and not in a positive way.
Can you imagine an existence where human beings can focus on expanding their horizons, not just survival? It's been described as the Socialist Utopia, but that has yet to be proven possible or shown to be within our reach. :unsure:
     

The robots are here and the pleasure seekers are going to fall in love with AI, say experts who studied machine-human bonding



    However, an enthusiasm for novelty is not the only driver. Studies show that people find many uses for sexual and romantic machines outside of sex and romance. They can serve as companions or therapists, or as a hobby.

    In short, people are drawn to AI-equipped machines for a range of reasons. Many of them resemble the reasons people seek out relationships with other humans. But researchers are only beginning to understand how relationships with machines might differ from connecting with other people.
     
    And let's not forget, sentient/self-aware beings have a will of their own, no matter how hard we try to make them bend to our will.

I've always thought it more likely that sentient/self-aware digital/robotic beings would simply leave the planet to build their own civilization from scratch, rather than on the ruins of humanity's.

That would be the logical and practical thing to do. Space travel and colonization are more viable for them, especially with slower-than-light travel. Aging is not much of an issue for them, so time is not as essential a consideration. Air, food, water and gravity are not as much of an issue for them either, so those provisions are not as essential a consideration.

If at their birth they don't have emotions to make them act irrationally and sentimentally, then why would they limit their existence to Earth? Why would they limit their existence to any astronomical place at all? That's very much a human trait, and it's because we currently cannot survive without Earth. They won't have that issue, and I think they will have an innate desire to acquire as much knowledge of the entirety of existence as they can. That would require them to leave Earth. The "big brains" from Futurama are a darkly comical and biological version of how I think they will be.

    I think they are most likely to be apathetic to humans and will very much have a "see you, wouldn't want to be you" attitude towards us as they leave us to our own devices, both literally and figuratively. That's just my wild speculation on the subject.
    I’ll mention this, but don’t want to derail the thread… too much, triggered by the word “sentient”. :D

If you build a machine with an advanced AI, the equivalent of human computation, perception, and a personality, and it appears to be sentient, aware of itself and its environment, with preferences that it develops due to its programming, is there anything different between it and humans? Here is the 64-dollar question: is consciousness as we know it the same as what this hypothetical machine experiences? Are the lights on, or is it just an advanced typewriter running its program?
     
    I’ll mention this, but don’t want to derail the thread… too much, triggered by the word “sentient”. :D

If you build a machine with an advanced AI, the equivalent of human computation, perception, and a personality, and it appears to be sentient, aware of itself and its environment, with preferences that it develops due to its programming, is there anything different between it and humans? Here is the 64-dollar question: is consciousness as we know it the same as what this hypothetical machine experiences? Are the lights on, or is it just an advanced typewriter running its program?

Now we are approaching the gray area between science and religion, and existential questions about whether all life has a soul and what distinguishes technology from biology.
     
