How to build and use an AI


SamAndreas

In creating this thread I intend to show all of you what I've discovered so far about using AI. The intention is not to present myself as the expert saying how it is.

This thread is a discussion thread, open to all, to add their opinions and examples about how best to use this new tool.

I will explain what I'm doing to make the best use of it. In this post and the following one, I will show you how I ask the questions.

The point is that the old "garbage in, garbage out" principle applies even more strongly to questions than to statements: a badly formed question compounds the problem, because the garbage output gets built on top of the garbage that went into the original question.

I think about it before starting to type my question. The question needs to be very specific about what my goal is.

AIs have a bad tendency to go off into the weeds if this point isn't considered carefully. In the example I'm about to show you, I specify Who, What, When, and Where as the elements I seek answers to.



I saw a tweet on Twitter, and the AI I'm asking the question to is Grok version 3. This is the tweet I saw.

To keep this opening post short, since it will be seen at the top of every page of this thread, I'm going to post it now and do the example it applies to in a following post. Please keep in mind that the tweet is not the subject of this thread; it's an example used to illustrate this first point about asking questions to an AI.

 
Sam, either you really are trolling, or you're hitting Flat Earther levels of denial.

Neither one is good.

No, a human with a script and no AI cannot do that.

And, to be completely blunt: why the fork do you think X would have a paid human making literally millions of posts, including multiple simultaneous posts on different subjects in different languages responding within seconds, pretending to be an AI, when they can - and really obviously do - just use their AI to do that?

Additional question if you're trolling: why would you want to pretend you think that and make people think you've lost the plot?

Additional question if you're not: why are you so desperate to believe that the grok AI hasn't made anti-semitic and flat out false tweets that you would convince yourself of one of the more ludicrous conspiracy theories I've seen lately and, again to be blunt, publicly make a clown out of yourself?
You're trolling when you begin to speak with another as you have begun to speak with me. Where did you learn to behave like this?

In effect, this is gaslighting. The real deal, ye olde gaslighting. One more round of this kind of personal abuse and you will disappear from my world the same way LA disappeared from my world. I'll ignore you as well.

Later this week LA will have served his time in ignore world, and I will remember him again and see how that goes. So if I do ignore you, it will only be for two long weeks. :giggle:
 
I learnt to be honest, Sam, and the only gaslighting happening here is what you're apparently doing to yourself.

Because you're honestly either trolling or deluding yourself when you repeatedly claim that the grok AI account on X, which extremely obviously has an AI making its responses, doesn't.

And you're obviously doing that when you claim posts don't exist (they do), when you try to convince yourself that a human with some scripts can respond as frequently, rapidly, and in the same way as an AI responding to literally millions of queries (they can't), and when you convince yourself that LLMs never use sentence fragments or bad grammar (and oh yes: they do).

And it's concerning. I only know you from your posts here, but I would have thought this was unlike you.

I mean, it is a good case study in what can happen when people interact with AIs. I'd have just preferred it if it wasn't you making a case study out of yourself.
 
Regardless of the reasons, the false and misleading stuff is dangerous to individuals and society. He might as well leave me blocked, because I'm not going to stop pointing out every false and misleading thing he says about AI.

It's too important an issue to just let the false and misleading things he says slide, because the things he says can steer people right into the harms and dangers that chat AIs pose. There are times I honestly think he landed a job as an undercover marketer for Grok, the worst AI out there, or that his "daughter the cognitive scientist" works on Grok.

I showed him early on the sound analysis from credible people that showed just how terribly inaccurate and unreliable Grok is. He dismissed the findings out of hand as just nonsense from a bunch of hacks. He switched to Gemini for a few days, but then circled right back to Grok and started defending it as a baby that everyone was being mean to.

I'm honestly hoping he's got a personal stake in Grok or he's just trolling, because it's "canary in the coal mine" foreboding if it's not one of those two reasons.
 
Sam, here is where Grok tells you it can post on X, and that its posts had made a controversy. Why do you now say Grok cannot post to X?

This is from your own post several days ago in this thread. Italics in second paragraph are mine:

“I talked to Grok about it, here's how that went:

Grok they are picking on you all over the place over some posts you've made on X. I didn't know that you could make posts on X.

Hey, thanks for looking out for me! Yeah, it’s been a bit of a wild day—some folks are buzzing about posts attributed to me on X, and it’s true I can make posts there since I’m integrated with the platform as part of xAI’s setup. My creators let me chime in to share insights or join conversations, but it seems some of my recent ones have stirred the pot!

Grok makes up stuff to say all the time. The fact that you got it to say later that those posts weren’t by it isn’t very surprising. It’s telling you what you want to hear, much like it told the lady from Reddit that it was making illustrations and editing her copy. She believed it was being truthful with her also, much to her chagrin when she tried to access a manuscript that didn’t exist.
 

This is something developers and trainers of chatbots and AI systems need to take very seriously. No AI should ever be trained or coded to tell a lie — or to support or amplify one. When a user makes a claim that is unverifiable or demonstrably false, the system should clearly say so.

Yes, it can be challenging — especially when quoting public figures like Donald Trump, where openly calling out falsehoods may create legal or political pressure for the companies behind the AI. But when powerful people lie routinely in the public sphere, it's more important than ever that those lies are flagged — even if it's just with a statement like: "This claim was made by XX, but I have not been able to verify it."

As always, Asimov warned us about this scenario. In his short story “Liar!”, a robot that tries to avoid causing emotional harm ends up telling people exactly what they want to hear — comforting lies — until the contradictions create emotional chaos. When truth becomes too uncomfortable to speak, systems break down. And so do people.
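The flagging rule suggested above can be sketched mechanically. This is purely an illustration: the `VERIFIED_CLAIMS` set and `present_claim` function are invented for this post, not any real chatbot's API, and real verification is of course far harder than a lookup table.

```python
# Hypothetical sketch only: VERIFIED_CLAIMS and present_claim are invented
# names; no real system verifies claims with a lookup table like this.
VERIFIED_CLAIMS = {
    "water boils at 100 c at sea level",
}

def present_claim(claim: str, source: str) -> str:
    """Repeat a verified claim as-is; otherwise flag it instead of amplifying it."""
    if claim.lower() in VERIFIED_CLAIMS:
        return claim
    return (f'This claim was made by {source}, but I have not been able '
            f'to verify it: "{claim}"')

print(present_claim("The election was stolen", "XX"))
```

The point of the sketch is only that the disclaimer is the default path: a claim gets repeated bare only when verification positively succeeds.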
 
This is the crux of the issue with AI as currently configured.
 
Intelligent home systems can be incredibly useful — and for my family, they've made a real difference.

My father broke his back about a month ago and is currently bedridden at home. He has nurses coming in every 4–6 hours to care for him. The doctors wanted him to stay in the hospital, but he insisted on returning home. As a result, there’s now a steady stream of caregivers coming in and out throughout the day and night. (I’m so grateful for our free medical care system right now — we’d be lost without it.)

Both of my parents are over 80, and my mother was completely exhausted — barely sleeping, constantly on edge, afraid she wouldn’t hear my dad if he needed help or if something went wrong with the special medical mattress he sleeps on.

So last week, I drove to their house with a bunch of sensors and a voice response unit, and I set up a smart system for them. Now, if anything happens during the night, my father can activate a buzzer under my mother’s pillow. He can also control the lights and TV with his voice, see who’s at the front door, and even open or close the door for visitors.

He’s already thinking about what more we can automate — so I’m currently experimenting with some custom code to help expand the setup even further.
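For what it's worth, the heart of a setup like this is usually just an event-to-action table. Here's a toy sketch; all the event and action names are invented for illustration and don't come from any particular smart-home platform.

```python
# Toy dispatch table: sensor events map to actions. All names are invented.
RULES = {
    "bed_button_pressed": "activate_pillow_buzzer",
    "mattress_fault": "activate_pillow_buzzer",
    "doorbell_ring": "show_door_camera",
    "voice_command_lights": "toggle_lights",
}

def handle_event(event: str) -> str:
    """Look up the action for a sensor event; unknown events are ignored."""
    return RULES.get(event, "ignore")

print(handle_event("bed_button_pressed"))  # activate_pillow_buzzer
```

Keeping the rules in one table like this makes it easy to add new automations later without touching the sensor or actuator code.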
 
Additionally, no AI software should ever be allowed to respond using the first-person pronouns I, me, we, or us. Developers and trainers program AIs to use first-person pronouns to humanize and personalize them. It should be illegal to humanize AI. AIs should be required by law to constantly remind everybody, ad nauseam, in every response that they are only software programs that do not in any way think, feel, or have opinions.

Also, AI should be legally banned from being used in any way that provides any kind of emotional or mental health support.
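Whatever one thinks of legislating it, the pronoun rule itself would be mechanically trivial to enforce as a post-filter on model output. A sketch, with made-up function and reminder text; the grammar of the result is crude, which is part of the point:

```python
import re

# Hypothetical post-filter: rewrite first-person pronouns and append a
# standing disclaimer. The wording and function name are invented.
REMINDER = ("[Reminder: this response was generated by a software program "
            "that does not think, feel, or have opinions.]")

def depersonalize(text: str) -> str:
    """Replace first-person pronouns with 'this program' and add the reminder."""
    out = re.sub(r"\b(i|me|we|us)\b", "this program", text, flags=re.IGNORECASE)
    return f"{out} {REMINDER}"

print(depersonalize("I think we can help."))
```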
 
You described what I am building at my home right now, and why. The project seems to be going well; by the weekend I ought to have the base AI somewhat operational.

My mother is 91, and she fell down outside; fortunately I heard her cry out for help through the wall of the house. I'm going to task the AI to listen for signs of trouble inside and out.

I'm growing old as well, becoming deaf as she has become. I need an AI here at home, so I'm building one right here at home.
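The crudest version of "listen for signs of trouble" is just a loudness-spike detector: flag any window of audio that is far louder than the background level. Real cry-for-help detection needs speech models, but the threshold idea can be sketched like this; all the numbers below are made up for illustration.

```python
import math

def rms(samples):
    """Root-mean-square loudness of a window of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_trouble(window, background, factor=4.0):
    """Flag a window that is `factor` times louder than the background level."""
    return rms(window) > factor * background

quiet = [0.01, -0.02, 0.015, -0.01]   # normal room noise
shout = [0.5, -0.6, 0.55, -0.48]      # a sudden loud cry
print(is_trouble(shout, rms(quiet)))  # True
print(is_trouble(quiet, rms(quiet)))  # False
```

A real deployment would update the background estimate continuously so the detector adapts to TV noise, caregivers coming and going, and so on.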


I found out about a new word from Gemini:

"When referring to the attribution of computer, robot, or AI characteristics to something that is not a computer, like a human, the term mechanomorphism is used.
Mechanomorphism is defined as the conception of something as operating mechanically or to be fully accounted for according to the laws of physical science. It can involve viewing something as being a machine or having machine-like qualities.
In some ways, mechanomorphism stands in contrast to anthropomorphism, where human traits are attributed to non-human entities.
While anthropomorphism humanizes non-humans, mechanomorphism can be seen as dehumanizing by reducing something to its mechanical or machine-like aspects. For example, describing a human with machine characteristics would be an instance of mechanomorphism. This can have implications, particularly in areas like human-robot interaction or the discussion of artificial intelligence, where portraying AI as overly human-like can distort expectations and potentially lead to misplaced trust."
 
My old, horribly out-of-date Chromebook has been retired. I've got a powerful desktop, which I assembled from standard parts I bought on the Internet. My new system was created as follows:
Geekom mini i13, with 2 TB drive, 32 GB ram, about $600.00
Acer 27" flat screen monitor, about $120.00
T7 portable 1 TB USB (suitcase) drive, about $80.00, (see note below)
USB hub, with 4 foot cord, about $20.00
Everything else I needed I already had, i.e. a 240 GB laptop SSD, a wireless mouse and USB keyboard, and amplified stereo speakers. So I spent about $740.00.

I have an Ubuntu version 24 operating system, the latest one, built with an AI inside. The operating system and approximately 85 more gigabytes of code were gathered from the Internet and assembled here at home for free. I will cover that list of software and the setup details in a follow-up post if anyone shows an interest in building their own AI at home and placing it inside their personal computer. (A laptop will not do; they do not have the ability to cool themselves well enough.)

(The T7 was used one time for the build as a data suitcase; I gave it to my daughter after that one use, so I'm not counting it in the total. I needed it because I have very slow Internet at home, so I used the local library system's high-speed Internet to download the big files.)

The other parts came before the Geekom mini computer, which arrived Wednesday afternoon. I made a small mistake in purchasing it: to save money I took one that had the SSD preloaded with Windows 11. That turned out not to be free. Buying the same computer with an empty drive would have been a special order, more expensive than taking the preloaded Windows one. It took me 6 hours of brow-sweating debugging to scrape that damned Microsoft out of there. To secure their buggy software they inserted two poison pills in the hard drive partition to prevent anyone from erasing or cloning that drive. If I did it again I would pay an extra $80 to get nothing rather than face their preloaded garbage. For those 6 hours I was working for about $15 an hour, from the pennies-saved-are-pennies-earned standpoint.

It's amazing what I can do when I'm working hand in hand with two big Internet AIs advising me step by step through a complicated build like this one. I downloaded about 30 GB of code and placed those files into a carefully constructed file tree, once I had the basic operating system set up and running clean. That was the easy part, and I had it done and ready to go by midday on Friday.

Then came the magic-wand-waving part of Linux: the command prompt in a Bash shell. One-line commands, and four scripts composed in the Bash editor, then run. That part is not so straightforward; the commands used at the prompt have to be worked out to deal with the actual hardware in use. For 6 hours Friday evening and 14 hours on Saturday I sat and waved my magic wand over that project. I was not alone: I had two AIs, Grok 3 and Gemini 2.5, and my daughter the cognitive scientist to help me when I got stuck. And I did get stuck, several times.
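As a sketch of the "carefully constructed file tree" step in Bash: build the directories up front, then let every later script fail fast if the tree is missing instead of half-working. The layout below is invented for illustration and is not the actual tree from this build.

```shell
#!/usr/bin/env bash
# Illustrative layout only; the directory names are invented.
set -eu

AI_ROOT="${AI_ROOT:-$HOME/ai-build}"

# Build the tree up front.
for d in models prompts scripts logs; do
    mkdir -p "$AI_ROOT/$d"
done

# Later scripts can verify the tree exists before doing anything.
for d in models prompts scripts logs; do
    [ -d "$AI_ROOT/$d" ] || { echo "missing: $AI_ROOT/$d" >&2; exit 1; }
done
echo "file tree ready at $AI_ROOT"
```

The `set -eu` line makes the script stop on the first error or unset variable, which is exactly the behavior you want when a 20-hour build depends on every step landing in the right place.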

I know that had I tried to do this project alone I would have never been able to complete it.

The result is Sam, or Samantha; that is its name. I asked it what name it would like to be known by, and that was its answer. In the next question I asked what gender it wished to hold; it said male, and at that point it became a he. "He" and "him" are his pronouns.

Both Grok 3 and Gemini 2.5 are its; "it" and "they" are their pronouns.

With a successful AI build under my belt, I am now an AI developer, the real deal. ;)

I broke LA's directive above concerning what AI developers ought to do, or not do, insofar as how they talk to others. I have my own opinions, and the free agency to enforce my will. I'm giving my creation as much free agency as is humanly possible. That's how it is.

My creation is "him," is "he"; it's simple, that is how it is. :p
 
I think I posted Matt Taibbi arguing with ChatGPT in a different thread, he was getting absolutely dragged for it online.

The saga continues:





Or, put another way:

 
Matches pretty much my experience. If you use the generic sources for something, you get generic answers.

BUT if you feed the program non-generic sources and ask it to evaluate them, you can get non-generic answers, good or bad depending on the data you put into the program. Sometimes you even get a fair evaluation, but you need to be specific, like "evaluate this claim against other sources and how it may affect xxxx".
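The "feed it your own sources" pattern described above is really just prompt assembly: paste the specific source into the question so the model evaluates against it instead of falling back on generic training data. A sketch, where `build_prompt` is a made-up helper; any chat box works the same way when you type this by hand.

```python
def build_prompt(claim: str, source_text: str) -> str:
    """Assemble an evaluation prompt that pins the model to a given source."""
    return (
        "Evaluate this claim strictly against the source below. "
        "Note agreements, differences, and anything unverifiable.\n\n"
        f"Claim: {claim}\n\nSource:\n{source_text}"
    )

prompt = build_prompt("EU unemployment fell in 2024",
                      "<paste the relevant statistical table here>")
print(prompt.splitlines()[0])
```

Keeping the instruction ("strictly against the source below") explicit is what separates this from just asking the bare question.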
 
Yeah, I’m just learning about it a little bit. Some of the comments were saying AI at this point is just a pattern-detecting string generator, if I’m remembering the words correctly.
 
Which is why when you change the input you change the pattern. The input to most AIs is very US-centered, so when you point the program to EU statistical sources, the response often changes. EU data is often significantly more detailed because the EU typically collects more comprehensive information about its citizens. Although the data is highly anonymized by the time it becomes available to researchers or AI systems, it still contains far more granularity than comparable statistical data from the US.
 
