
Learning AI
If you are logged in to ChatGPT, Gemini, or Copilot and ask a question such as "I'm afraid I might have cancer; what are the symptoms to watch for?", does it "learn" that you might have cancer and adjust future answers around that presumption?
I've asked questions about a different topic (not cancer), but many subsequent answers *seem* to work that topic in to some extent. The answers aren't wrong, but some of them are a little questionable.
For example, start with the prompt above. An hour later, you ask, "What's the best way to get a tan in winter?" and the answer brings up skin cancer. A couple of days later, you ask about your bones hurting, and of course that could be bone cancer, along with many other things. Then you ask about a headache, and it doesn't mention cancer. But later you ask what a WBC blood test means, and it mentions a form of cancer that causes a low WBC. On and on.
All of those answers could plausibly include cancer. My question is: are the answers it gives influenced by my original prompt, or not?
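In case it helps frame what I mean by "influenced": below is a minimal sketch, assuming the OpenAI Python SDK and a made-up conversation (the model name and message contents are just examples), of how a chat app can simply re-send earlier messages as context with each new request. That kind of carry-over within a conversation is the sort of thing I'm wondering about, as opposed to the model permanently learning something about me.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Conversation history kept by the client application, not inside the model.
history = [
    {"role": "user", "content": "I'm afraid I might have cancer, what are the symptoms to watch for?"},
    {"role": "assistant", "content": "...earlier answer about cancer symptoms..."},
]

# A later, unrelated question is appended to the same history.
history.append({"role": "user", "content": "What's the best way to get a tan in winter?"})

# The earlier cancer exchange rides along as context in the new request,
# so the reply may mention skin cancer even though nothing was "learned."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=history,
)
print(response.choices[0].message.content)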
Thanks.
Mark