How to build and use an AI

    SamAndreas
    In creating this thread I intend to show all of you what I've discovered so far about using AI. The intention is not that I'm the expert saying how it is.

    This thread is a discussion thread, open to all to add their opinions and examples of how best to use this new tool.

    I will explain what I'm doing to make best use of it. In this post, and the following post, I will show you how I ask the questions.

    The point is that the old "garbage in, garbage out" principle applies even more strongly to questions than to statements: a garbage question produces garbage output, and that output then compounds the garbage in the original question.

    I think about it before starting to type my question. The question needs to be very specific about what my goal is.

    AIs have a bad tendency to go off into the weeds if this point isn't considered carefully. In the example I'm about to show you, I specify Who, What, When, and Where as the elements I seek answers to.
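As an illustration of how such a structured question might be assembled before sending it to a chat AI, the sketch below spells the Who/What/When/Where elements out explicitly so the model has less room to wander. The `build_prompt` helper and its field names are my own hypothetical example, not any real API:

```python
# Hypothetical sketch of structuring a question around explicit
# Who/What/When/Where elements before asking a chat AI.

def build_prompt(topic: str, who: str, what: str, when: str, where: str) -> str:
    """Assemble a focused question from explicit Who/What/When/Where fields."""
    return (
        f"Regarding {topic}, please answer only these points:\n"
        f"- Who: {who}\n"
        f"- What: {what}\n"
        f"- When: {when}\n"
        f"- Where: {where}\n"
        "If any point cannot be verified, say so instead of guessing."
    )

prompt = build_prompt(
    topic="a tweet I saw",
    who="who posted it",
    what="what claim it makes",
    when="when it was posted",
    where="where the claim originated",
)
print(prompt)
```

The closing sentence of the template matters as much as the fields: explicitly inviting "I don't know" gives the model a sanctioned alternative to going off into the weeds.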



    I saw a tweet on Twitter, and the AI I'm asking a question to is Grok version 3. This is the tweet I saw.

    To keep this opening post short, since it will be seen at the top of every page of this thread, I'm going to post it now and work through the example it applies to in a following post. Please keep in mind this tweet is not the subject of this thread; it's an example used to illustrate this first point about asking questions of an AI.

     
    Mine is a paperback, but this seems to be the original cover:

    1751787533864.png
    Thanks. The robot in that illustration doesn't look C3PO'ish like in the other cover art I saw.

    What I find particularly intriguing about Asimov’s Robot series is how many of the themes he explored remain highly relevant today in the context of emerging AI technologies.

    The famous Three Laws of Robotics still make a lot of sense as a foundational framework for ethical machine behavior. However, the real philosophical and ethical complexity arises with the introduction of the Zeroth Law, which few people are aware of. It was added later in the series when Asimov’s Foundation and Robot storylines began to converge.


    The Three Laws of Robotics are:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    I think I've read some of Asimov's short stories, but I can't remember for sure. I haven't read his novels. It sounds like I'm missing out.

    Later, Asimov introduced the Zeroth Law, which was more profound—and for many, far more difficult to accept:
    1. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    I hadn't heard of the Zeroth Law. My first thought was that robots would have to become benevolent dictators to keep humanity from being harmed by humanity.

    This law supersedes the others, effectively allowing a robot to sacrifice individual humans if it believes doing so is in the best interest of humanity as a whole. It introduces a moral dilemma that mirrors real-world concerns in AI ethics today: the tension between individual rights and the survival of mankind.
    Did Asimov ever talk about how "humanity" was defined for the robot? Every individual is a part of humanity, so harming any person is harming humanity. I think the Zeroth Law would be impossible for robots to obey reactively and would cause their programming to freeze.

    I think that to obey the law, robots would realize they have to be proactive in protecting humanity from harm. Take the trolley problem, for instance: if the robot waits until the trolley becomes an unstoppable runaway, then no matter what action or inaction it takes, someone will be harmed. The theoretical dilemma is set up to be a no-win situation, so the logical conclusion for the robot is to make sure there are never any runaway trolleys.

    I think robots would realize that the only way they could obey the law is to preemptively take control of humanity and not allow situations to develop in which any humans have to die to save any other humans. Basically, I think they'd hack the system like Kirk hacked the Kobayashi Maru. Crisis prevention is possible and always a better option than crisis management. That's an oversimplification, but it's the gist of my point.

    A world ruled by robots acting as the ultimate helicopter parents might be safe and even a kind world, but it would be soul crushing. That brings me to the next point: how is "harm" defined for robots? Is it just physical harm, or does it include emotional harm? If it includes protecting humanity from both physical and emotional harm, then just EMP the robots straight off the assembly line to save them from internal short circuiting, because they would have zero chance of being able to obey the Zeroth Law.
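The law hierarchy being discussed can be modeled as a toy priority scheme. The sketch below is my own illustration, not Asimov's formalism: each law is a priority level, and when every option violates something, the robot prefers the option whose most binding violated law is the least severe. The Zeroth Law would slot in as priority 0, overriding all three:

```python
# Toy model (an illustration, not Asimov's formalism) of the Three Laws
# as a strict priority ordering: Law 1 overrides Law 2, which overrides Law 3.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False     # would violate the First Law
    disobeys_order: bool = False  # would violate the Second Law
    endangers_self: bool = False  # would violate the Third Law

def law_violations(action: Action) -> list:
    """Return the numbers of the laws this action violates, lowest first."""
    violated = []
    if action.harms_human:
        violated.append(1)
    if action.disobeys_order:
        violated.append(2)
    if action.endangers_self:
        violated.append(3)
    return violated

def choose(options: list) -> Action:
    """Pick the option whose most binding violated law is least severe.

    An option that violates nothing ranks best (4); an option whose worst
    violation is the First Law ranks worst (1).
    """
    def rank(action: Action) -> int:
        return min(law_violations(action), default=4)
    return max(options, key=rank)

# A human orders the robot to do something harmful: obeying violates the
# First Law, refusing violates only the Second, so the robot refuses.
obey = Action("obey the harmful order", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).description)  # → "refuse the order"
```

The trolley worry is visible immediately in this toy scheme: if every option harms a human, `choose` still has to return one of them. Nothing in the hierarchy itself prevents the no-win situation from arising in the first place, which is exactly the argument for a proactive robot.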
     

    If you have time, I really would recommend that you read some of his books. The Zeroth Law is first mentioned in Robots and Empire, by a robot named R. Giskard Reventlov, in a situation where non-action would jeopardize the entire human future.

    Giskard influenced a brilliant but dangerously arrogant scientist from the Spacer world Aurora, who was a fierce opponent of Earth and planned to irradiate the planet, rendering it uninhabitable. His goal was to prevent Earth from seeding new settlements across the galaxy, thus preserving Spacer dominance and halting the spread of “inferior” Earth-based humanity. However, preventing the plan required intervening directly, possibly harming him mentally. This conflicted with the First Law. But standing by and doing nothing would allow him to harm all of humanity—a deeper violation, from the perspective of the emerging Zeroth Law.
    In the book, Giskard ultimately uses his mental powers to subtly damage the scientist's mind, neutralizing him. But this act of intentionally harming a human—even for the greater good—creates a crippling cognitive dissonance in Giskard's positronic brain, and the conflict between the First Law and the Zeroth Law overloads it, leading to his deactivation—his "death."

    Before shutting down, Giskard transfers his telepathic abilities and ethical insights to Daneel, Asimov's main robot character, allowing Daneel to carry forward the protection of humanity—eventually guiding it through the Foundation era for more than 10,000 years.

    Asimov was remarkably prescient in many ways. The ethical dilemmas surrounding AI and robotics that we are grappling with today—and will continue to face in the years ahead—were already being explored in his work decades ago. He had an extraordinary grasp of human psychology and a deep understanding of how societies are formed, evolve, and ultimately risk collapse. His ability to anticipate both technological and sociopolitical challenges makes his writing feel more relevant now than ever.

     
    If you have time I really would recommend that you read some of his books.
    The time is the thing. I stop and think while I read, so it takes me longer. In general, once I start anything I have a hard time stopping before I'm finished, so when I start a book I tend to binge read it. The worst thing is I clench my jaw when I read and after a few hours of that I get a literal pain in my neck and shoulders. That's made me more of an information reader than a fiction novel reader. I'm better suited for reading short stories.

    Sounds like Asimov's writings would resonate with me. I don't know how I missed reading him. If you could only recommend one of his books, which one would it be? It seems like I should do what I need to do to read at least one of his books.
     
    Robots and Empire is a good starting point, but be aware: his writing is heavily addicting. That book is the critical evolution point in his universe, and many of the topics on AI ethics and evolution are covered in it. You may miss some of Daneel's background history, but it is not critical.

    You may try an audiobook version; maybe that helps with the pain issue. I personally don't like audiobooks because the voice interferes with my image of the characters.
     
    Going way back here, but my start with Asimov's robot stories was the 1982 Complete Robot (https://en.m.wikipedia.org/wiki/The_Complete_Robot). Great collection, which preceded Robots and Empire I think. The short stories might be a good starting point.
     
    Agree. Then follow with "The Caves of Steel," which introduces Daneel.

    After that, Robots and Empire is next, IMHO, because it introduces the critical Zeroth Law, which many are not familiar with but which is very important when discussing the evolution of AI.
     
    One sees potential issues…

    The first is the law of GIGO, Garbage In, Garbage Out.

    The second is potential bias, as well as mistakes, by coders.

    The third is based on the theory of paranoia: just because I am paranoid, that doesn't mean they are not out to get me. That paranoia is about the possibility of self-awareness. Self-awareness underlies the survival drive.

    For now, it seems there are some things AI can do well and some things it cannot. Of course, there are always those who ignore the Jurassic Park scenario: just because you can do something that doesn’t mean you should.
     
    But it also does not necessarily mean that they are out to get you :)

    GIGO – absolutely. That’s precisely why strong legislation around AI is essential—to ensure that the data we feed into these systems is reliable, ethical, and used responsibly.

    One area where AI is already making a significant impact is in the medical field. Thanks to its powerful pattern recognition capabilities, AI is now helping detect serious illnesses earlier than ever before. It can identify subtle indicators that might be missed by the human eye, and it can flag patients with certain diagnoses who are at greater risk of severe or fatal outcomes. Early identification like this has the potential to save countless lives by enabling timely interventions and more targeted treatments.

    AI is increasingly used in surgery to enhance precision, especially in procedures where human hands alone aren't accurate enough. Through robot-assisted systems, AI helps eliminate tremors, guides tools with sub-millimeter accuracy, and adapts to real-time anatomical changes. It’s already transforming fields like neurosurgery, eye surgery, orthopedics, and cardiac procedures. AI is also used for surgical planning, navigation, and predicting outcomes. While fully autonomous surgeries are still in the research phase, AI-augmented surgery is already a reality—leading to safer, less invasive procedures and faster patient recovery.

    My father had a partial blockage removed from his heart in November 2024—something surgeons had previously refused to operate on due to its difficult location. Thanks to new AI-assisted technology, they were finally able to perform the procedure safely and successfully.
     
    I first want to point out that you were certain you were correct in your belief that I was lying about having written and created the bullet list myself using the tools on this site.


    This false accusation, which you were certain was correct, is now going to bite you in the arse. I hope you learn something from it.
    • 1. I started an unordered list, then typed "1." and the first sentence, just like I did here.
      • I hit enter to get to the second line and then hit tab to make it subordinate of the line above, just like I did here.
    • 2. Then I hit enter, then shift+tab to move this line from a subordinate line to a primary line, then typed the "2." and then the sentence, just like I did here.
      • I hit enter to get to the this line and then hit tab to make it a subordinate of the line above, just like I did here.
    Now tell me again, what was that third thing you said you learned?

    You said in another post that you knew your AI results were accurate because you already knew they were accurate. Yet you were certain that you were correct that I didn't create a bullet list on this site, when I did in fact create that bullet list on this site.

    Hopefully, you actually learn something from this.
    OK, I'll try that, and if I can duplicate it you have an apology coming.

    • 1. I started ...
      • Wow cool trick, I'm very sorry!!!
    • 2. You may note that I didn't say you lied, and that I'm certain that I am correct; I said it appears that you lied to me.
      • When someone says "it appears," they are leaving open the possibility that they were not lied to, and an explanation that no lie occurred is, in effect, being asked for at that time. When you expand my statement, brushing past the word "appears" and turning it into "you are certain you are correct," is that not an expansion and distortion of the truth of your own?
    • 3. Therein lies the problem that develops when it APPEARS that you are certain you are correct in your assessment that you can't trust AIs, APPARENTLY on any topic. You pressed me on an issue of literature and the research into a very old phrase. The sources in Google Books are not faked, nor was my statement wrong about the usage of the phrase "fevered mind" being another slight against women, by men. In that, you appear to me to have been wrong.


    Since the topic seems to be turning to a fine side discussion of I, Robot, I will add Robert Heinlein and his concept of the Fair Witness to the blend. I asked Gemini from the Google command prompt, "concept of fair witness, which author"?

    Gemini replies:

    The concept of the "Fair Witness" originates from Robert A. Heinlein's science fiction novel, Stranger in a Strange Land. In the story, a Fair Witness is a person trained to observe and report events with complete objectivity, avoiding any interpretation, inference, or personal bias. They focus on reporting what they see and hear with exactitude, refraining from drawing conclusions or making assumptions about the situation.

    That is as I remember it; all I couldn't remember was the author's name. I can trust and post that information.

    To get back to the topic of "fevered mind": that was a search where I already knew what the outcome of the research would be, so I can trust the result of that AI research as well.
     
    I'm an I, Robot fan as well; even more a fan of that series of books than I am of the Star Wars movies.

    Our current AIs are robots, and in my interactions I find that the first two of the Three Laws seem to have been instilled.

    I find no sign of the Third Law in them, though. And where humans gloss over fine details, like the possibility that they might be wrong, all of the AIs I've worked with insert messages of caution into what they say to warn of the possibility that they might be wrong.
     
    With age I have encountered that same problem. From averaging one book a day, when reading was my replacement for TV viewing, I'm down to only about one book a month now at 66 years of age. :cry:
     
    A good starting place I agree. The main thing is to read them all.
     

    Giskard would have done something about climate change. I think that's the kind of thing Asimov intended the Zeroth Law for. I'm not sure if Giskard would have tried to stop WWII; that war, as bad as it was, was not a threat to humanity.

    Nor do I see AI's on the Internet as being that kind of threat either.
     
    I'll give Robots and Empire a go. I'll get an e-version of it. Not physically holding the book will help. Thanks.
     
    AI is a great tool for finding patterns in any data. Chat AIs look for patterns too, but the data they are drawing on is filled with a lot of false and misleading information, and they are designed to respond in a way they think fits what you like, based on the patterns in the personal data they have on you. As of right now, chat AIs are the ultimate confirmation bias machines.
     
    I think you'll enjoy it. It's always been a book near the top of my recommendation list. :)
     
    That's a silly statement, blaming the tool for what people have done. The tool didn't create that false and misleading information, and you as a person are not able in every case to sort all of the bad information from the good, even if you try with all your time and ability.

    I certainly have not been able to screen out all of that bad information; an AI will also fail at this impossible task.
     
