How to build and use an AI

    SamAndreas:
    In creating this thread I intend to show all of you what I've discovered so far about using AI. The intention is not to present myself as the expert saying how it is.

    This thread is a discussion thread, open to all to add their opinions and examples of how best to use this new tool.

    I will explain what I'm doing to make best use of it. In this post, and the following post, I will show you how I ask the questions.

    The point being that the old principle of "garbage in, garbage out" is even more applicable to questions than to statements, since the garbage compounds: the "garbage out" gets added to the garbage already in the original question.

    I think about it before starting to type my question. The question needs to be very specific insofar as what my goal is.

    AI's have a bad tendency to go off into the weeds if this point isn't considered carefully. In the example I'm about to show you I specify Who, What, When, Where as elements I seek answers to.
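    One way to make that Who/What/When/Where discipline concrete is a small prompt-builder sketch. This is purely my own illustration: the field names, the source-restriction line, and the example values are assumptions, not any AI vendor's API.

```python
# Minimal sketch of a structured prompt builder for the
# Who/What/When/Where approach described above.
# All field names are illustrative, not part of any real API.

def build_prompt(topic, who, what, when, where, sources=None):
    """Assemble a focused question instead of a vague one-liner."""
    lines = [
        f"Topic: {topic}",
        f"Who is involved: {who}",
        f"What happened: {what}",
        f"When: {when}",
        f"Where: {where}",
    ]
    if sources:
        # Restricting the allowed sources helps keep the model on track.
        lines.append("Only use these sources: " + ", ".join(sources))
    lines.append("Cite a source for every factual claim.")
    return "\n".join(lines)

prompt = build_prompt(
    topic="Viral tweet about storm warnings",
    who="The account that posted it",
    what="The specific claim being made",
    when="The date of the tweet",
    where="The region it refers to",
    sources=["noaa.gov", "weather.gov"],
)
print(prompt)
```

    The point of the template is only that forcing yourself to fill in each field does the "fine honing" before the question ever reaches the AI.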



    I saw a tweet on Twitter, and the AI I'm asking a question of is Grok version 3. This is the tweet I saw.

    To keep this opening post short, since it will be seen at the top of every page of this thread, I'm going to post it now and work through the example it applies to in a following post. Please keep in mind this tweet is not the subject of this thread; it's an example used to illustrate this first point about asking questions of an AI.

     
    Those people only hit their own hands with their hammer. People who believe and quote factually inaccurate AI chat responses are helping to spread misinformation that hits all of us in the head while they ignorantly swing their hammer.


    And that's exactly what I'm doing. I'm discussing how to use it as an effective tool and how not to use it in the ways that it is a dangerously ineffective tool. I'm also supporting what I'm saying with credible third party sources, not just personal experience or opinion.
    I think what you're doing is best referred to as upmanship, that and projection.
     
    AI is a tool and as with every tool it is only as good as the user who uses it. Use it incorrectly or carelessly and the end result matches the effort as always.
    Anyone who quotes chat AI without independently verifying everything it says is using it carelessly.

    Anyone who quotes chat AI that includes factual inaccuracies is clearly using it carelessly.

    That's the bottom line of everything I've been saying.
     
    Anyone who quotes chat AI without independently verifying everything it says is using it carelessly.

    Anyone who quotes chat AI that includes factual inaccuracies is clearly using it carelessly.

    That's the bottom line of everything I've been saying.

    Choosing the right tool is a big part of doing any job well. You wouldn’t use a screwdriver made of cheap, soft metal—you’d pick a high-quality steel one. The same principle applies when working with AI: it’s not just about using AI, but about choosing the right AI for the task and knowing how to use it properly. At my previous workplace, we used AI both in customer support and for preliminary code writing—but not the same AI systems for both. And I say “preliminary” code writing deliberately, because no code ever went live without being reviewed and approved by a programmer or QA. There was always something that needed fixing or refining.

    The support AI was particularly useful for gathering customer information. It was much better than the static list of questions we used to rely on, because it could adapt follow-up questions based on previous answers. However, AI was never used to actually solve customer problems. There were two main reasons for that:
    1. AI struggles with what’s not explicitly said—it doesn’t read between the lines the way humans do.
    2. When people experience problems, they want human interaction. They want someone who understands urgency, who can empathize, and who can see how a single issue might affect other parts of their business.
     
    I think what you're doing is best referred to as upmanship, that and projection.
    This is sincerely important to me. Everything I'm saying is honest, forthright, genuine and important to me. My disagreements with you are coming only from that place. I have no interest in "upmanship" and I have no idea what you're trying to imply that I'm "projecting."

    The only thing I'm "projecting" is how important it is to do everything we can to accurately inform ourselves and others. Societies fall apart when people don't know what to believe, because it destroys trust, and with it unity and community. Destroying that trust is one of the primary goals of disinformation. Once we lose trust, the predators can easily pick us off one by one.

    To paraphrase The Rise of Skywalker, they only win if they trick us into believing we are alone. One of the primary goals of misinformation is to trick us into thinking we can't trust anyone, so that we feel alone. One of the biggest positives I get from this board is confirmation that I'm not alone in what I value and think, and in my willingness to stand up to and resist the actions and intentions of Trump and his Republicans. That's very powerful spiritual and emotional nourishment.

    Knowing, and being reminded, that we're not alone is one of the most important emotional needs we have in order to thrive. No one thrives if they feel alone. People who feel alone are easier to manipulate into supporting extreme cruelty to others, or into surrendering to it.
     
    As a computer scientist and researcher, my main AI related tip right now would be... that you can add "-noai" to your google searches to turn off the AI summary at the top.

    That's not because it's completely useless, it's because it's largely unreliable and frequently misleading. It's an appropriate tool for a few things, but it cannot reliably do the things a lot of people seem to think it can do.

    Like the idea of using it as a research assistant or to summarise content, for example; if you had a human doing that, and they surprisingly frequently just hallucinated answers, citing non-existent content, or citing content that actually says the exact opposite of what they claimed it said, you would probably end up firing that human, even if they were very fast and otherwise, sometimes, correct.
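    For convenience, the "-noai" operator mentioned above can be appended programmatically. The operator itself comes from the post; the URL construction below is just ordinary standard-library usage, sketched as a minimal example:

```python
# Sketch: build a Google search URL with the "-noai" operator
# appended, so the AI summary is suppressed as described above.
from urllib.parse import urlencode

def search_url_without_ai_summary(query):
    """Return a Google search URL for `query` with '-noai' appended."""
    return "https://www.google.com/search?" + urlencode({"q": query + " -noai"})

url = search_url_without_ai_summary("hurricane audrey landfall time")
print(url)
```

    Of course, typing `-noai` at the end of the query box by hand does the same thing; the sketch only shows where the operator goes.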
     
    This is sincerely important to me. Everything I'm saying is honest, forthright, genuine and important to me. My disagreements with you are coming only from that place. I have no interest in "upmanship" and I have no idea what you're trying to imply that I'm "projecting."

    The only thing I'm "projecting" is how important it is to do everything we can to accurately inform ourselves and others. Societies fall apart when people don't know what to believe, because it destroys trust, and with it unity and community. Destroying that trust is one of the primary goals of disinformation. Once we lose trust, the predators can easily pick us off one by one.

    To paraphrase The Rise of Skywalker, they only win if they trick us into believing we are alone. One of the primary goals of misinformation is to trick us into thinking we can't trust anyone, so that we feel alone. One of the biggest positives I get from this board is confirmation that I'm not alone in what I value and think, and in my willingness to stand up to and resist the actions and intentions of Trump and his Republicans. That's very powerful spiritual and emotional nourishment.

    Knowing, and being reminded, that we're not alone is one of the most important emotional needs we have in order to thrive. No one thrives if they feel alone. People who feel alone are easier to manipulate into supporting extreme cruelty to others, or into surrendering to it.
    I hear you — and I really appreciate the honesty and urgency in your words. What you're saying about trust, disinformation, and emotional connection is so important. I am a big sci-fi fan, and your words reminded me of how unsettlingly accurate Asimov was — not just about technology, but about the emotional and intellectual unraveling of society. In The Naked Sun, he showed a world where isolation and fear of human contact erode empathy and make control easy. And in real life, he warned of anti-intellectualism — the idea that 'my ignorance is just as good as your knowledge.' That line, which I keep in my signature, feels less like a cautionary quote now and more like a mirror. It’s sad how close we’ve come to the world he feared.

    In a way, the combination of COVID and the rise of authoritarian demagogues created, in our world today, a perfect storm much like the one he described in those books. The pandemic physically isolated us, while disinformation worked in the background to break down trust, truth, and community. The result is a surge in fear, loneliness, and extremism — exactly the conditions where cruelty and manipulation thrive.

    That’s why your point about connection is so crucial. When we remind each other we’re not alone, we’re doing more than offering emotional support — we’re actively resisting the forces trying to divide, confuse, and isolate us. Knowing others share your values and are willing to stand up matters. That kind of solidarity is, as you said, spiritual nourishment — and it’s also one of the most powerful tools we have in protecting democracy, truth, and each other.
     
    Choosing the right tool is a big part of doing any job well. You wouldn’t use a screwdriver made of cheap, soft metal—you’d pick a high-quality steel one. The same principle applies when working with AI: it’s not just about using AI, but about choosing the right AI for the task and knowing how to use it properly. At my previous workplace, we used AI both in customer support and for preliminary code writing—but not the same AI systems for both. And I say “preliminary” code writing deliberately, because no code ever went live without being reviewed and approved by a programmer or QA. There was always something that needed fixing or refining.

    The support AI was particularly useful for gathering customer information. It was much better than the static list of questions we used to rely on, because it could adapt follow-up questions based on previous answers. However, AI was never used to actually solve customer problems. There were two main reasons for that:
    1. AI struggles with what’s not explicitly said—it doesn’t read between the lines the way humans do.
    2. When people experience problems, they want human interaction. They want someone who understands urgency, who can empathize, and who can see how a single issue might affect other parts of their business.
    Instructive, that's a nice addition to this thread Dragon.
     
    As a computer scientist and researcher, my main AI related tip right now would be... that you can add "-noai" to your google searches to turn off the AI summary at the top.

    That's not because it's completely useless, it's because it's largely unreliable and frequently misleading. It's an appropriate tool for a few things, but it cannot reliably do the things a lot of people seem to think it can do.

    Like the idea of using it as a research assistant or to summarise content, for example; if you had a human doing that, and they surprisingly frequently just hallucinated answers, citing non-existent content, or citing content that actually says the exact opposite of what they claimed it said, you would probably end up firing that human, even if they were very fast and otherwise, sometimes, correct.
    That's not my experience at all, Rob, though at first I did find some of it to be like that.

    My experience is that I have to finely hone my question, and include in it the limits on which resources may be used and a complete description of the methodology the AI is to use. It's very similar to the abstract a scientific paper must have as its opening paragraph.

    The upside is that by the time I've produced an "abstract" for what I'm about to do, it has honed my own focus as well.

    Thanks for mentioning "-noai"; that made me think perhaps I could add "allai." I haven't tried that; instead I asked Gemini at Google about it from the normal Google query line. I asked: how can I focus on the AI at Google directly from the query line, and how can I express that I only wish to speak to the AI and that is all?

    I found out from its reply that Google's AI is named Gemini, as well as what I'd been needing to know. This is Gemini's reply:

    Google integrates AI, specifically Gemini, into search through "AI Mode". Here's how to focus on the AI:
    1. Access AI Mode:
    • Direct URL: Go to google.com/aimode.
    • Google Search (Web): Enter your query in the standard Google search bar and look for the "AI Mode" button or tab that appears below it.
    • Google App: The Google app's home screen has an "AI Mode" icon or button.
    2. Interact within AI Mode:
    You can ask questions using text, voice, or images. The AI responds with information from the web, and you can continue the conversation with follow-up questions.
    Important Notes:
    • Availability: AI Mode's availability may vary by location, device, and Search Labs participation.
    • Focus on AI: The direct URL or the dedicated AI Mode sections in the Google app or website will give you a more focused interaction with the AI and its capabilities.
    • Search Live: The Google app has a "Search Live" feature within AI Mode for voice input and audio responses, creating a more conversational experience.
    These methods allow you to use Google's generative AI and get more comprehensive and conversational responses.

    I found a video, and the following text, when I entered the link it suggested, google.com/aimode.

    How AI Mode works

    With AI mode, you can ask anything and get a comprehensive AI-powered response with links to explore further. You can ask questions that might have previously taken multiple searches – like exploring a new concept or comparing options – and you can ask follow-up questions. AI mode uses a “query fan-out” technique, dividing your question into subtopics and searching for each one simultaneously across multiple data sources. It then brings those results together to provide an easy-to-understand response. Learn more about AI Mode and AI Overviews in Search.

    As with any early-stage AI product, AI Mode doesn’t always get it right. For example, in some cases AI Mode may misinterpret web content or miss context, as can happen with any automated system in Search.

    How to double-check responses:

    • Always check important info in more than one place. Click the links for AI sources and try other Google Search results too. Learn more about how to evaluate info you find with Google.
    • Ask multiple versions of your question to get the best answers. Try asking for different opinions or ask for more details.
    • Send feedback to Google if something doesn’t look right. Your feedback helps us improve AI Mode for everyone.
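    The "query fan-out" technique described in the quoted text above can be sketched as a toy: split a question into subtopics, search each one concurrently, then merge the results. Here `search()` is a stand-in placeholder of my own, not a real Google API:

```python
# Toy sketch of "query fan-out": run one search per subtopic
# in parallel, then merge the per-subtopic result lists.
from concurrent.futures import ThreadPoolExecutor

def search(subquery):
    # Placeholder: a real implementation would hit a search index.
    return [f"result for: {subquery}"]

def fan_out(question, subtopics):
    """Search all subtopics of `question` concurrently and merge results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, subtopics)
    merged = []
    for results in result_lists:
        merged.extend(results)
    return merged

hits = fan_out(
    "Is it safe to hike Mt. Whitney in October?",
    ["Mt. Whitney October weather",
     "Mt. Whitney permit requirements",
     "Mt. Whitney October trail conditions"],
)
print(len(hits))
```

    The real system presumably also ranks and synthesizes the merged results; this sketch only shows the split-search-merge shape the documentation describes.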

     
    My favorite Morgenbesserism is his reply to B.F. Skinner: "Let me see if I understand your thesis. You think we shouldn’t anthropomorphize people?"
     
    I hear you — and I really appreciate the honesty and urgency in your words. What you're saying about trust, disinformation, and emotional connection is so important. I am a big sci-fi fan, and your words reminded me of how unsettlingly accurate Asimov was — not just about technology, but about the emotional and intellectual unraveling of society. In The Naked Sun, he showed a world where isolation and fear of human contact erode empathy and make control easy. And in real life, he warned of anti-intellectualism — the idea that 'my ignorance is just as good as your knowledge.' That line, which I keep in my signature, feels less like a cautionary quote now and more like a mirror. It’s sad how close we’ve come to the world he feared.
    Can you confirm whether this is original artwork for The Naked Sun paperback? Supposedly, it is one of the original covers. If it is, then Star Wars is also derivative of Asimov as well as Herbert.

    [attached image: book cover]


    In a way, the combination of COVID and the rise of authoritarian demagogues created, in our world today, a perfect storm much like the one he described in those books. The pandemic physically isolated us, while disinformation worked in the background to break down trust, truth, and community. The result is a surge in fear, loneliness, and extremism — exactly the conditions where cruelty and manipulation thrive.
    It was a perfect shirtstorm. I'm optimistic that somehow, someday we'll recover from it. I'm just fearful of how much pain and suffering might occur before we do.

    That’s why your point about connection is so crucial. When we remind each other we’re not alone, we’re doing more than offering emotional support — we’re actively resisting the forces trying to divide, confuse, and isolate us. Knowing others share your values and are willing to stand up matters. That kind of solidarity is, as you said, spiritual nourishment — and it’s also one of the most powerful tools we have in protecting democracy, truth, and each other.
    I've experienced first hand how connection, trust, and sharing allow us to move through fear and out of shame and isolation. That's what gives protests the power to bring about change. The reason the powers being protested hate protests is that they know protests are one of the most effective ways to bring an end to what they are doing. That's why dictators kill protesters: it's not just a matter of ego, it's also a matter of survival.

    To everyone reading this: we can't rely on corporate media to cover protests, or to cover them in an accurate way. If you participate in a protest, please take pictures and record videos that show the peaceful but determined unity, and then share them everywhere you can with everyone you can. If the majority of Americans believe we are in this together, we will prevent the worst from happening.
     
    My experience is that I have to finely hone my question, and include in it the limits on which resources may be used and a complete description of the methodology the AI is to use. It's very similar to the abstract a scientific paper must have as its opening paragraph.
    Let's look at two uses:
    • 1. Use AI to generate a list of sources on a subject.
      • Everyone has said AI is a good tool for finding sources that we then read for ourselves, quoting directly from those sources ourselves.
    • 2. Believe and/or quote what the AI says those sources say.
      • What AI is not a good tool for is trusting anything it says those sources say, by way of either summary or direct quote. AIs are horribly inaccurate when it comes to writing anything that is allegedly factually true.
    If someone is doing 1, then great.

    If someone is also or only doing 2, then they are making a big mistake and if they share the AI's summaries or quotes with others, then they are inevitably going to be spreading misinformation.

    This is a fact verified by a multitude of experts and independent analysis. This is not an opinion. To ignore the experts and independent analysis is irrational.
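    The distinction between use 1 and use 2 above can be expressed as a small verification sketch: accept an AI's list of sources, but check any quote it attributes to a source against the source text yourself. Everything here is my own illustration; `fetch_text` is a named placeholder, not a real library call.

```python
# Sketch: verify an AI-attributed quote against the actual source
# text before repeating it, per use 1 vs. use 2 above.

def fetch_text(url):
    # Placeholder: in practice, download the page and extract its text.
    raise NotImplementedError

def quote_is_verified(claimed_quote, source_text):
    """True only if the quoted words actually appear in the source."""
    return claimed_quote.strip().lower() in source_text.lower()

# Usage: never repeat the AI's paraphrase; repeat only what you confirmed.
source_text = "The storm made landfall earlier than forecast."
assert quote_is_verified("made landfall earlier", source_text)
assert not quote_is_verified("made landfall exactly as forecast", source_text)
```

    A substring check is deliberately strict: a quote that the AI lightly reworded will fail it, which is exactly the failure you want surfaced before you post.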
     

    AI has worked wonders filling the void after the staff of NOAA was gutted. It was able to successfully botch warning the people of Texas about the severity of the storms. The rest of the country can look forward to more of the same.
     
    Let's look at two uses:
    • 1. Use AI to generate a list of sources on a subject.
      • Everyone has said AI is a good tool for finding sources that we then read for ourselves, quoting directly from those sources ourselves.
    • 2. Believe and/or quote what the AI says those sources say.
      • What AI is not a good tool for is trusting anything it says those sources say, by way of either summary or direct quote. AIs are horribly inaccurate when it comes to writing anything that is allegedly factually true.
    If someone is doing 1, then great.

    If someone is also or only doing 2, then they are making a big mistake and if they share the AI's summaries or quotes with others, then they are inevitably going to be spreading misinformation.

    This is a fact verified by a multitude of experts and independent analysis. This is not an opinion. To ignore the experts and independent analysis is irrational.
    I find that interesting, LA, considering the hell you've been giving me about ethics and truth in posting, that you would take that nested list, using both solid and hollow bullet points, from somewhere on the Internet and paste it into your post as if you wrote it.

    I don't think it would be possible to have formatted that nested, two-item bullet-point list in the compose-a-post box we have here.
     
    I find that interesting, LA, considering the hell you've been giving me about ethics and truth in posting, that you would take that nested list, using both solid and hollow bullet points, from somewhere on the Internet and paste it into your post as if you wrote it.
    Dude, I wrote that myself and composed it using only the formatting tools on this site.


    I don't think it would be possible to have formatted that nested, two-item bullet-point list in the compose-a-post box we have here.
    You are mistaken.
     

    AI has worked wonders filling the void after the staff of NOAA was gutted. It was able to successfully botch warning the people of Texas about the severity of the storms. The rest of the country can look forward to more of the same.
    Eeyore, a super thumbs up for posting this, and a super mad emoji at Trump and his Republicans' Project 2025 causing unnecessary deaths, of children no less.

    All of my family still lives in southwest LA. Hurricane Audrey killed the most people in that area to date. The reason it did is that it happened before the satellite and other advanced tracking we have today. In the middle of the night, Audrey sped up undetected and made landfall much earlier than expected.

    It's depraved and sociopathic to undermine the advance weather warning technology, data and systems that we have.
     
    Dude, I wrote that myself and composed it using only the formatting tools on this site.



    You are mistaken.
    Hmmp, we have formatted lists for our use here, four kinds, and nested lists are not among them:

    Here's a sample of output from the first option ordered lists:
    1. no bullet points here either solid or hollow.
    2. still no bullet points.
    For unordered lists:
    • has a solid bullet point, but no number.
    • Another solid bullet point, but no hollow bullet point.
    For Indent lists:

    I don't know how to use this, unless all it does is indent.
    That must be it: it indents the place where one starts typing.
    I don't seem to be able to get it to stop indenting. I don't like this one.
    I'll have to try the fourth kind of list we have from the indented position.
    That fourth one, the outdent list, fixed the problem the third kind, the indent, created: it outdented it.

    By golly I learned three things:
    1. how to indent
    2. how to outdent
    3. you appear to have lied to me about having created it here.


     
    Hmmp, we have formatted lists for our use here, four kinds, and nested lists are not among them:
    I first want to point out that you are certain that you are correct in your belief that I'm lying about having written and created the bullet list myself using the tools on this site.

    By golly I learned three things:
    1. ...
    2. ...
    3. you appear to have lied to me about having created it here.
    This false accusation, of which you are certain you are correct about, is now going to bite you in the arse. I hope you learn something from it.
    • 1. I started an unordered list, then typed "1." and the first sentence, just like I did here.
      • I hit enter to get to the second line and then hit tab to make it subordinate of the line above, just like I did here.
    • 2. Then I hit enter, then shift+tab to move this line from a subordinate line to a primary line, then typed the "2." and then the sentence, just like I did here.
      • I hit enter to get to the this line and then hit tab to make it a subordinate of the line above, just like I did here.
    Now tell me again, what was that third thing you said you learned?

    You said in another post that you knew your AI results were accurate because you already knew they were accurate. Yet you were certain you were correct that I didn't create a bullet list on this site, when in fact I did create that bullet list on this site.

    Hopefully, you actually learn something from this.
     
    Can you confirm whether this is original artwork for The Naked Sun paperback? Supposedly, it is one of the original covers. If it is, then Star Wars is also derivative of Asimov as well as Herbert.

    [attached image: book cover]



    It was a perfect shirtstorm. I'm optimistic that somehow, someday we'll recover from it. I'm just fearful of how much pain and suffering might occur before we do.


    I've experienced first hand how connection, trust, and sharing allow us to move through fear and out of shame and isolation. That's what gives protests the power to bring about change. The reason the powers being protested hate protests is that they know protests are one of the most effective ways to bring an end to what they are doing. That's why dictators kill protesters: it's not just a matter of ego, it's also a matter of survival.

    To everyone reading this: we can't rely on corporate media to cover protests, or to cover them in an accurate way. If you participate in a protest, please take pictures and record videos that show the peaceful but determined unity, and then share them everywhere you can with everyone you can. If the majority of Americans believe we are in this together, we will prevent the worst from happening.


    Mine is a paperback, but this seems to be the original cover:

    [attached image: book cover]


    What I find particularly intriguing about Asimov’s Robot series is how many of the themes he explored remain highly relevant today in the context of emerging AI technologies.

    The famous Three Laws of Robotics still make a lot of sense as a foundational framework for ethical machine behavior. However, the real philosophical and ethical complexity arises with the introduction of the Zeroth Law, which few people are aware of. It was added later in the series when Asimov’s Foundation and Robot storylines began to converge.


    The Three Laws of Robotics are:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    Later, Asimov introduced the Zeroth Law, which was more profound—and for many, far more difficult to accept:
    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    This law supersedes the others, effectively allowing a robot to sacrifice individual humans if it believes doing so is in the best interest of humanity as a whole. It introduces a moral dilemma that mirrors real-world concerns in AI ethics today: the tension between individual rights and the survival of mankind.
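    Purely as an illustration of the precedence ordering just described (Zeroth over First over Second over Third), here is a tiny sketch. It models only the priority relation from the text, not anything a real AI system implements:

```python
# Asimov's laws as an ordered precedence list: earlier entries
# override later ones, per the "supersedes" relation above.

LAWS = [
    ("zeroth", "may not harm humanity, or by inaction allow humanity to come to harm"),
    ("first",  "may not injure a human being, or by inaction allow one to come to harm"),
    ("second", "must obey human orders unless they conflict with a higher law"),
    ("third",  "must protect its own existence unless that conflicts with a higher law"),
]

def governing_law(laws_at_stake):
    """Return the highest-priority law among those at stake, or None."""
    for name, _text in LAWS:
        if name in laws_at_stake:
            return name
    return None

# The Zeroth Law dilemma: when protecting humanity (zeroth) conflicts
# with protecting an individual (first), the zeroth governs.
print(governing_law({"first"}))            # first
print(governing_law({"first", "zeroth"}))  # zeroth
```

    The moral weight, of course, lives in deciding *when* the Zeroth Law is genuinely at stake, which is exactly the dilemma the novels dramatize and which no precedence list can resolve.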
     
