15 replies
I'm late to the game and have just recently started learning about and using AI. But my question is about AI learning.

If you are logged in to ChatGPT, Gemini, or Copilot and ask a question such as, "I'm afraid I might have cancer; what are the symptoms to watch for?" does it "learn" that you might have cancer and adjust future answers around that presumption?

I've asked questions about another topic, not cancer, but many future answers *seem* to all include, to some extent, the topic. The answers aren't wrong, but some of them are a little questionable.

For example, take the above prompt. An hour later, you ask, "What's the best way to get a tan in winter?" and it includes skin cancer in the answer. A couple of days later, you ask about your bones hurting, and of course that could be because of bone cancer, along with many other things. Then you ask about a headache, but it doesn't mention cancer. But later you ask what a WBC blood test means, and it mentions a form of cancer causing low WBC. On and on.

All of those answers could include cancer. My question is: are the answers it gives based on my original prompt or not?

Thanks.
Mark
#learning
  • Profile picture of the author Monetize
    It depends on a number of things, like whether you have
    the A.I. tool's memory feature on or not, if it has one.

    Some of them have a memory and some of them do not.

    ChatGPT has a feature where you can tell it what you want
    it to know about you or your project or whatever.

    You can ask your A.I. if it remembers your previous chats.
    For example, "Gemini, do you remember our previous chat
    about such and such?"

    If it does, and you don't want it to remember, you can adjust
    your settings and/or delete your previous chats.

    You should also remember that these conversational A.I.
    tools are experimental. Therefore you should not rely on
    them for serious medical advice; they will probably tell
    you something similar themselves.

    It's fine to ask about simple home remedies for colds, etc.
    but if you actually have cancer you need to seek
    professional medical care.

    FYI, I jumped on the A.I. wagon the second time I heard
    about it in December 2022. I use different tools for various
    purposes like images and video, and I highly recommend it.
  • Profile picture of the author Wild Man
    That's one of the rubs with AI, a lot of people believe it is indeed "intelligent."

    It's nothing more than a computer program that gets all its information from scraping the net. I don't know about others, but Grok will list its sources. Say you have it write an article. It will tell you which sites it sourced and provide links.

    Anyone who takes any kind of "advice" from it is doing themselves a disservice. But you know people will.

    The double edged sword of technology.

    Wild Man
    • Profile picture of the author Monetize
      Originally Posted by Wild Man View Post

      That's one of the rubs with AI, a lot of people believe it is indeed "intelligent."

      It's nothing more than a computer program that gets all its information from scraping the net. I don't know about others, but Grok will list its sources. Say you have it write an article. It will tell you which sites it sourced and provide links.

      Anyone who takes any kind of "advice" from it is doing themselves a disservice. But you know people will.

      The double edged sword of technology.

      Wild Man

      I had a household issue a couple of days ago, and I asked
      ChatGPT if it was something I should be concerned about.

      It answered my concerns and assured me that everything
      was fine. So it does give advice, and the response wasn't
      canned, it was customized to my circumstance.

      Another household question I had previously, I needed to
      glue something and had about six different types of glue
      and adhesive.

      I asked ChatGPT which one would work for my purpose,
      and it gave the correct answer as well as an explanation
      of what all my glues and adhesives were formulated for.

      Again, it was a customized response since what I wanted
      to glue were two unique items.

      I could have researched online, but it's just easier for me
      to ask ChatGPT.

      ChatGPT does not scrape the internet; it has a database
      of knowledge from various sources.

      If you ask it to research something, for example when you
      want links, it does search the internet then, for obvious reasons.

      I could tell you dozens of stories of how I have used it
      for business purposes, but I won't.

      Neither ChatGPT nor any of the other A.I. tools I use have
      done me any "disservice."
  • Profile picture of the author DWolfe
    I have not paid too much attention to it. The other day I used Gemini to look up Layer Switch Internet Exchange and asked several questions about the topic. Then I asked a few other questions that were not subject-related. So far it has not shown anything trying to sell me products related to LSIE.
  • Profile picture of the author Mark Singletary
    As far as the idea that AI is just scraping the web goes, why do they need all the extra power and servers compared to a web-scraping service such as Google Search? I've read recently that AI needs tons of power, and they were talking about how the current power grid maybe couldn't handle it. It seems that they wouldn't need all that for just scraping, but I'm new to all this, so maybe I'm wrong.

    Mark
    • Profile picture of the author Monetize
      Originally Posted by Mark Singletary View Post

      As far as the thinking that AI is just scraping the web, then why do they need all the extra power and servers compared to a web scraping service such as Google Search? I've read recently that AI needs tons of power, and they were talking about how the current power grid maybe couldn't handle it. It seems that they wouldn't need all that for just scraping, but I'm new to all this, so maybe I'm wrong.

      Mark

      Those are the types of questions that you can ask your A.I. tool.

      Just ask it why it needs so much energy.

      It will explain anything under the sun.

      If the output is too technical or whatever, tell it to explain again.

      IMO, it probably needs more electricity for the same reason that
      Bitcoin miners do, to keep their machinery cool.
    • Profile picture of the author Odahh
      Originally Posted by Mark Singletary View Post

      As far as the thinking that AI is just scraping the web, then why do they need all the extra power and servers compared to a web scraping service such as Google Search? I've read recently that AI needs tons of power, and they were talking about how the current power grid maybe couldn't handle it. It seems that they wouldn't need all that for just scraping, but I'm new to all this, so maybe I'm wrong.

      Mark
      Well, I can't give a great answer, just what I picked up after watching a few videos on why DeepSeek can claim to get better performance on some tasks than the massive billion-dollar systems.

      Because these big expensive models are trying to achieve AGI, they use far more energy to translate something from one language to another than an AI built and dedicated solely to translating language would.

      These big expensive systems open the door to using them to train a thousand different task-specific AIs that are far less expensive to build and use far less energy. The larger models have already used up all the human-generated data available and are now being trained on synthetic data.

      Monetize was correct about much of the energy use going to cooling, as the GPUs used were originally developed to run graphically demanding video games. Newer chips that need far less cooling are probably already designed, but it takes years to build the facilities to mass-produce them.

      So by 2028-2030, the power needs will no longer be a huge issue.
    • @Mark,

      Originally Posted by Mark Singletary View Post

      [SNIP]As far as the thinking that AI is just scraping the web, then why do they need all the extra power and servers compared to a web scraping service such as Google Search? I've read recently that AI needs tons of power, and they were talking about how the current power grid maybe couldn't handle it. It seems that they wouldn't need all that for just scraping[/SNIP]
      No, you're right. It doesn't just browse the Web to answer queries. That's the job of Google, Bing, and other SEs.

      To browse the Web, ChatGPT models and others, including proprietary and open-source models used in free and premium products, need to be supported with function calling or hardcoded programmatic workflows to perform autonomous Google searches and content extraction using external APIs (third parties like SERPAPI and Extractor, or private/proprietary ones).

      Web browsing is a task that a multimodal or text-only LLM can only complete if it's given function-calling capabilities and supplied with functions that perform those autonomous Google searches and data extraction through programmatic workflows and APIs. It also needs prompts that help it understand each supplied function's required and optional input parameters, as well as its expected response outputs, such as particular keys and values in JSON data or Python dictionary format.
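      To make that concrete, here is a minimal sketch of what function calling looks like. The `web_search` function and its JSON schema are hypothetical stand-ins for whatever search API you actually wire up (SERPAPI or otherwise); the point is that the model only *requests* the call, and your code runs it:

```python
# Minimal function-calling sketch: the LLM never browses by itself.
# It is given a JSON schema describing a tool; when it decides a search
# is needed, it emits the function name plus arguments, and YOUR code
# runs the actual search and feeds the results back to the model.

def web_search(query: str, num_results: int = 3) -> list:
    """Hypothetical wrapper around an external search API."""
    # A real implementation would call SERPAPI or similar here.
    return [{"title": f"Result for {query}", "url": "https://example.com"}]

# Tool schema in the JSON shape most function-calling APIs expect.
WEB_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result titles and URLs.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"},
                "num_results": {"type": "integer", "default": 3},
            },
            "required": ["query"],
        },
    },
}

# Simulate the model asking for a search, then dispatching it ourselves:
model_request = {"name": "web_search", "arguments": {"query": "LLM energy use"}}
if model_request["name"] == "web_search":
    results = web_search(**model_request["arguments"])
    print(results[0]["title"])
```

      The same pattern generalizes to content extraction, calculators, or any other tool: the schema tells the model what it may ask for, and the surrounding workflow does the real work.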

      Anyway, without getting into probabilistic statistical modeling, error-reduction calculus, and electronic logic gates, let's break down why Large Language Models like GPT-4o consume so much power, and why GPUs are the preferred hardware for running them (at least for now):

      Imagine a Giant Switchboard
      Think of an LLM as a massive switchboard with billions of tiny switches.
      These switches, called "parameters" in the LLM world, control how the model understands and generates text.
      Even more parameters are found in multimodal generative models that analyze and produce numeric data matching the format of digital text, audio, and images, all in the same space
      (digital videos are just frames of digital images).

      So when you ask an LLM a question, it's like sending a signal through this switchboard.
      The signal travels through the switches, and their positions determine the answer you get.

      Why So Much Power?

      Billions of Switches: LLMs have billions of these switches. Each one needs to be set correctly for the model to work. This requires a lot of calculations, and each calculation consumes a tiny bit of power. Multiply that by billions, and you have a significant power demand.
      Constant Activity: The switches are constantly being adjusted as the model processes information. This continuous activity requires a steady flow of power.
      Training is Intense: Before an LLM can answer questions, it needs to be "trained" on a massive amount of text data. This is like showing the switchboard countless examples of how language is used. This training process is incredibly computationally intensive and can take weeks or even months, requiring vast amounts of power.
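      The "billions of switches" point can be turned into back-of-envelope arithmetic. All the numbers below are ballpark assumptions (a generic 70B-parameter dense model and a generic modern GPU), not measurements of any specific product:

```python
# Rough rule of thumb: generating one token with a dense transformer
# costs about 2 FLOPs per parameter (one multiply and one add per
# "switch"), so a 70-billion-parameter model does ~140 GFLOPs per token.

params = 70e9                 # model parameters (assumed)
flops_per_token = 2 * params  # ~2 FLOPs per parameter per generated token

# Assume a modern GPU sustains very roughly 1e14 useful FLOP/s at ~700 W.
gpu_flops_per_s = 1e14
gpu_watts = 700

tokens_per_s = gpu_flops_per_s / flops_per_token
joules_per_token = gpu_watts / tokens_per_s

print(f"{flops_per_token:.1e} FLOPs per token")
print(f"~{joules_per_token:.2f} J per generated token on one GPU")
```

      Roughly a joule per token under these assumptions; multiply by millions of users and thousands of tokens per chat, plus the training runs, and the grid-scale power concerns stop sounding strange.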

      Why GPUs?
      Parallel Processing: GPUs are like super-efficient workers that can handle many tasks simultaneously. They're designed to perform the same calculation on many different pieces of data at the same time. This is perfect for LLMs, where the same calculations need to be done across billions of switches.
      Specialized for Math: GPUs are optimized for the type of math that LLMs rely on. They can perform these calculations much faster and more efficiently than CPUs (Central Processing Units), which are the "brains" of most computers.

      Why Not CPUs or Other Hardware?
      CPUs are Generalists: CPUs are good at a wide range of tasks, but they're not specialized for the kind of heavy-duty math that LLMs need. It's like asking a general contractor to build a skyscraper - they can do it, but it'll be much slower and less efficient than a team of specialists.
      Other Hardware: While there are other types of computing hardware, GPUs have proven to be the most effective and efficient for the specific demands of LLMs.

      In Simple Terms
      Imagine you have a vast library with billions of books, and you need to find a specific piece of information.
      An LLM is like having a team of super-fast librarians (GPUs) who can search through all the books simultaneously and quickly find the answer.
      This requires a lot of energy to keep the librarians working and the lights on in the library.

      Key Takeaways
      LLMs are like massive switchboards with billions of switches that need to be constantly adjusted.
      This requires a lot of calculations, which consume significant power.
      GPUs are specialized processors that can handle these calculations much faster and more efficiently than CPUs.
      The training process for LLMs is incredibly computationally intensive and requires vast amounts of power.

      ===
      For Math Nuts (couldn't resist)
      Imagine digital text, audio and images normalized in a matrix of representational numeric arrays across the same space.
      This allows comparative analytics using the same tokens between these formats.

      And this means the output of "training" an LLM (or any Machine Learning or Deep Learning algorithm) is a probabilistic statistical model.
      Basically, it's an algorithm.

      "Model training" is essentially shifting the vectors and indices of these numeric arrays towards the convergence of local and global gradient slopes using error rates calculated and normalized, through loss functions (error minimization algorithms), between 0 and 1 with 4/8/16/32/64-bit precision, from guesses and the correct answers in the answer key (groundtruth data with annotations), but without completely memorizing the correct answers so as to promote near-human generalization (still with errors but minimized, and controllable biases towards where and how much to shift as hardcoded adjuster weights) on unseen data later.
      And, this "training" process consists of multi-layered, multi-step iterations on the same pieces of data in the training set, which should be a good representation of the target real world use case.
      Signature
      Need Custom Programmatic SEO or GenAI Engineering Work Done? Drop Me an Email HERE ...
      • Chief Machine Learning Engineer @ ARIA Research (Sydney, AU)
      • Lead GenAI SEO Campaign Engineer @ Kiteworks, Inc. (SF, US)
  • Profile picture of the author Segmadis
    Basically, it needs more electricity because of the nature of today's AI models.
    See, while Google, for example, is essentially just a big catalog under the hood, AI is something different. Its training data does come from scraping the net, but it then processes that information so it can better understand what is written on any given website. That means running through practically endless loops, which in some cases are extremely power- and time-consuming.
    The other common explanation is that most of these models use Python as a backbone, and every programmer knows that while Python is good, easy, and adjustable, it is also slow, on the order of 100x slower than C or Rust for pure interpreted loops (though in practice the heavy math in AI frameworks runs in compiled C/CUDA kernels, so Python overhead is not the main power draw).
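    You can see the interpreter overhead yourself by summing numbers in a hand-written Python loop versus the built-in `sum`, which runs as a compiled C loop. The exact ratio varies by machine, so the sketch prints it rather than assuming one:

```python
import time

n = 1_000_000
nums = list(range(n))

# Pure interpreted loop: every iteration goes through the bytecode machinery.
t0 = time.perf_counter()
total = 0
for x in nums:
    total += x
loop_time = time.perf_counter() - t0

# Built-in sum() delegates the whole loop to compiled C code.
t0 = time.perf_counter()
total2 = sum(nums)
builtin_time = time.perf_counter() - t0

assert total == total2
print(f"interpreted loop is ~{loop_time / builtin_time:.0f}x slower")
```

    This is exactly why AI frameworks push the real number crunching down into C/CUDA and use Python only to orchestrate.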
  • Profile picture of the author Connor Spencer
    I want to echo what Monetize said - in my experience, it will often "learn" about you if you are logged in...

    The problem I sometimes run into is that the model will start to "hallucinate" - i.e. make things up that sound reasonable but aren't actually true, especially if it seems to be relevant to what I had mentioned earlier.

    I'd recommend keeping an eye on those hallucinations!
  • Profile picture of the author Gil Torres
    I have noticed that some AI tools will carry things I said in past conversations into future ones. However, if I simply tell one to "Clear Memory," it does just that.
  • Profile picture of the author dexcowork
    Whether you're exploring machine learning, neural networks, or automation, hands-on projects and real-world applications make learning more effective. What specific areas of AI are you most interested in?
As I said in my opening post, even though I am learning about AI, my post isn't about me learning AI but about AI itself learning. For example, it seems to remember previous prompts or answers and use them in new answers. As others have pointed out, some tools have a memory feature. I have learned how to turn that off or leave it on.

      Mark

      Originally Posted by dexcowork View Post

      Whether you're exploring machine learning, neural networks, or automation, hands-on projects and real-world applications make learning more effective. What specific areas of AI are you most interested in?
  • Profile picture of the author brileyknox
    I want to know how AI can help you build up your business and/or website. Am I missing something?
    Signature

    E-Commergy Made Easy
    https://www.sfimg.com/7943020

    • Profile picture of the author Monetize
      Originally Posted by brileyknox View Post

      I want to know how AI can help you build up your business and/or website. Am I missing something?

      A.I. can help you with a number of tasks such as content,
      code, images, business strategy, personal advice, etc.

      You should probably start using it and find out for yourself.