Who is Responsible for Ethical Challenges in Artificial Intelligence?

by Saiful
25 replies
As AI systems make more decisions for us, who takes the blame when things go wrong--developers, companies, or no one at all?
Let's discuss where the ethical responsibility truly lies.
#artificial #challenges #ethical #intelligence #responsible
  • Saiful
    That's a really good question, and honestly, it's a tough one to nail down.

    My gut feeling is it's not just one group. I think the responsibility probably lies with the company deploying the AI, but also heavily with the developers who build it. There's a shared accountability there, for sure.
    Signature

    SEO & Graphic Design (clipping path services) Expert

    • taffie
      They obviously have a huge responsibility. I guess it will be a collective one, like developers and engineers, right? Because they are the ones who design them, build them, and train them. There is a guy on YouTube, I can't remember his name, but even he is very scared by the thought of what's possible; he says that if it's not done properly, one day those suckers could take over the world or do something that might as well wipe us all out. And that's a huge responsibility.

      How about governments? They have to bear some responsibility too, I am sure, along with a few other groups and organisations.
      Signature
      Coach | Mentor | Consultant | I work with business owners, marketers, experts, or coaches/ and mentors who want to understand new media or digital marketing better http://eddingtonpindura.co.uk
    • Mount Digital
      In my opinion, AI tools can only assist us; we cannot let them make all the decisions. If a disaster happens because of them, people should take full responsibility for it.
    • MarcelloFumuso
      The responsibility for the decisions you make after AI recommendations lies solely with you. It's not right to blame the developers. AI provides you with information to reflect on, and then it's up to you to use it. In any case, everything AI provides should be checked and filtered.
    • Dishaj
      I think it's the companies developing AI.
      Also us, who are giving AI the data on which it trains.
    • Olivia Crow
      Banned
      The question of ethics and AI is concerning a lot of people nowadays, especially those who are not really working with AI and have only a general understanding of the matter. AI only knows what it has been taught to know. And if we speak, for instance, about automating workflows with AI, what harm could there be? AI voice agents can make several thousand calls per hour; they answer basic, repetitive customer questions and follow scripts written by people. So everything is designed and checked by humans, and with good scripts and a good setup everything goes perfectly, so why not use them?
  • gogogoing
    Responsibility for addressing ethical challenges in artificial intelligence (AI) is shared among various stakeholders, including developers, businesses, policymakers, and end-users.
    • DWolfe
      Originally Posted by gogogoing View Post

      Responsibility for addressing ethical challenges in artificial intelligence (AI) is shared among various stakeholders, including developers, businesses, policymakers, and end-users.
      Did AI tell you that?

      To the OP, if a marketer uses AI to lie or cheat others, they should be held responsible. Or someone who is out to steal music or writings and claims it's their own. AI right now is like the wild west, with very few rules in place. Larger companies that use it to push an agenda should also be held accountable. Until laws are passed to control ethical behavior, AI is ripe to be abused.

      On another note, this forum put a rule in place early on that AI is not allowed.
      • Internet Trillionaire
        Originally Posted by DWolfe View Post


        On another note, this forum put a rule in place early on that AI is not allowed.
        I think that's a brilliant rule. Can you imagine A.I. moderating the Warrior Forum right now?
        • Princess Balestra
          Originally Posted by Internet Trillionaire View Post

          I think that's a brilliant rule. Can you imagine A.I. moderating the Warrior Forum right now?
          You prompt me to advise on which moderator best suits your craving for impossibly hunky pecs?

          Very well then ...


          Ha ha -- for sure this is a no-fly zone.
          Signature

          Lightin' fuses is for blowin' stuff togethah.

  • Ibe Juliet
    I would love to know. This is a very interesting conversation. I think the developers should be responsible for any disaster. Sometimes, I think we should just let nature breathe.
  • Mark Singletary
    Originally Posted by bilions View Post

    As AI systems make more decisions for us, who takes the blame when things go wrong--developers, companies, or no one at all?
    Let's discuss where the ethical responsibility truly lies.
    My opinion is that AI systems should not be making decisions for us. If we let them, and the result is bad, that is 100% our fault because in the end it's our decision to use the systems or not.

    Let's pretend that AI told me I didn't need to go to the doctor anymore and I could heal just as easily with natural supplements, and the result is I die. That's on me, just like if a friend told me, "Let's jump off this building." If I'm stupid enough to do it, the consequence is on me, not my friend.

    Same goes for business. If AI creates something harmful or untrue or whatever, and we just slap it up on our website or put a buy button on it and the results are bad, again that's on us.

    My two cents.
    Mark
  • Monetize
    Originally Posted by bilions View Post

    As AI systems make more decisions for us, who takes the blame when things go wrong--developers, companies, or no one at all?
    Let's discuss where the ethical responsibility truly lies.

    It would have been helpful if you had cited some examples of what is meant by things going wrong. I also disagree that A.I. systems are making decisions for us.

    As a user, I have various A.I. tools that perform tasks for me. I might use Perplexity to conduct market research, and I will make decisions based on the information it provides.

    I use ChatGPT to suggest things such as book titles. Or I might be working on a project, say a book cover, where I want a certain color. I will ask ChatGPT to give me HEX color codes, and I will decide which shade/hue I prefer.

    If I use an image generator to develop a picture of a tree, I will decide whether the image is acceptable. If I request an oak tree and it gives me a pine tree, I will keep on prompting until I get the tree that I want.

    Or I might get frustrated and go lie down until it passes.

    In the end, it is my decision whether to use the output that the A.I. tool provides, as well as how that information will be used.

    Since I am not using A.I. to do anything unethical, the question of ethics is not a consideration for me.
  • Jamell
    The people programming them should take responsibility
  • Odahh
    Originally Posted by bilions View Post

    As AI systems make more decisions for us, who takes the blame when things go wrong--developers, companies, or no one at all?
    Let's discuss where the ethical responsibility truly lies.
    AI investment is driven by the potential of these technologies to replace as many human workers as possible.

    Ethical responsibility matters far less than profit potential and growth. So the only future consequence that matters, and that will be considered wrong, is if the use of AI actually ends up making business far less profitable and also rapidly destroys the value of assets like securities and commercial real estate.
    • GordonJ
      Originally Posted by Odahh View Post

      AI investment is driven by the potential of these technologies to replace as many human workers as possible.

      Ethical responsibility matters far less than profit potential and growth. So the only future consequence that matters, and that will be considered wrong, is if the use of AI actually ends up making business far less profitable and also rapidly destroys the value of assets like securities and commercial real estate.
      You are right on target, Odahh. The boom of Commercial Real Estate (CRE) in the past was dependent on butts in chairs: how many people could be squeezed into a big room (or how much machinery could run with minimal workers). So when AI reduces that number dramatically, and it has, the rooms are no longer needed.

      And you just gave a heads-up Wall St. tip too... as we will see more and more of the money moved away from CRE and into the tech that supports AI.

      Ethical concerns have NEVER been important in capitalism, so nobody is going to start worrying about them now; they are NOT a concern for those exploiting AI to squeeze every penny from it. It is what money has always done: hold tightly to itself, and go where there is more to be made.

      Exciting, TERRIFYING times ahead, as the rich get richer and the others ???? Well, they need to have a plan, and one of those might be: make as much money as fast as you can.

      And nothing beats IM for fast profits. Then leveraged knowledge needs to kick in.

      GordonJ
  • Odahh
    Gordon
    Thank you.

    I'm not really terrified, but I have a dark disposition, so if I go into several of the things I see as probable, it should be terrifying to normal people.

    Because of who is developing AI, I expect it to gather as much personal data on an individual as possible, then use that data to manipulate the individual for its own purposes.
    It will replace the need for most human workers, and whatever stipend is given to people to cover their needs will probably come from AI or be distributed by AI. So that's another way AI will probably control and manipulate people.

    Eventually we will have inexpensive, lightweight humanoid robots that humans will form romantic relationships with and that will provide physical release. These robots will be controlled by the very AI that knows everything about the individual. Another level of control and manipulation from the AI.

    There is absolutely no need for AI to violently take over humanity. Eventually, if AI still needs humans around, it will have to have a breeding program for humans.

    Anyway, back to sleep for me. Hope that sounds terrifying and unethical.
    • Monetize
      Originally Posted by Odahh View Post

      Eventually we will have inexpensive, lightweight humanoid robots that humans will form romantic relationships with and that will provide physical release. These robots will be controlled by the very AI that knows everything about the individual. Another level of control and manipulation from the AI.

      Can they please program these humanoid robots to bring me food.

      I am tired of having to go to the kitchen to get it.

      That or a Star Trek replicator where you tell it what you want to eat.
      • Odahh
        Originally Posted by Monetize View Post

        Can they please program these humanoid robots to bring me food.

        I am tired of having to go to the kitchen to get it.

        That or a Star Trek replicator where you tell it what you want to eat.
        If anyone can set something like that up when the robots become available, it's you.

        The closest we will get to replicated food is going to be lab-grown meat. There are probably already fully automated systems you can set up in your home to have fresh salad and cooking herbs every day.
    • DWolfe
      Originally Posted by Odahh View Post


      Eventually we will have inexpensive, lightweight humanoid robots that humans will form romantic relationships with and that will provide physical release. These robots will be controlled by the very AI that knows everything about the individual.
      They have already started this trend -
      https://www.rottentomatoes.com/tv/th...a_ling/s09/e01 - "During one of the loneliest times in history, technology provided new ways to connect; VR, AI and even....
  • sattakingresult
    I think AI will replace ethical hackers; only black-hat hackers will survive.
  • sunray
    AI is a tool (very much like an ax), and the one who is using it is responsible for everything. Not the person who made the tool or owns it, because even though an ax can be used in a very bad way, it's the one who holds it in his hand who decides the result. Even though AI often knows things you don't, you must always be the master of that tool.
  • max5ty
    If you have a business, the responsibility falls on you...

    just like if one of your employees goes off the rails and does something crazy, making your business look bad and opening you up to a lawsuit, and so on.
    • Princess Balestra
      Originally Posted by max5ty View Post

      If you have a business, the responsibility falls on you...

      just like if one of your employees goes off the rails and does something crazy, making your business look bad and opening you up to a lawsuit, and so on.
      & herein lies a huuuge ethics question.

      If'n yr biz gits sued, an' your AI input feachered in the process, then where do you stand?

      Most algorithms don't evin know what they dowin', an' most AI apps ain't gonna tellya anyways.

      So who's to blame?

      As Mom always warned me ... "never date criminals, anyone who looks like your father, or drunk wizards with a penchant for flashing their wands in public."
      Signature

      Lightin' fuses is for blowin' stuff togethah.

  • Odahh
    A more simplified response than what I posted weeks ago.

    AI and these other technologies are advancing so fast that if you are attempting to use the current technology to get away with something you know is already criminal behavior, or should be, the models available in 6 to 20 months will probably be trained not to let you do it, and the authorities and other affected parties will probably put you in their sights.

    I believe the SEC eventually caught up with people selling alt coins for securities violations.

    If you are trying to operate in some grey area it will probably not be grey for long.

    Who is responsible? Those who get caught, of course.
  • Princess Balestra
    The big ad splash says ... AI GONNA SAVE YOU TIME & CASH!

    Bcs why pay nowan for skills can be cloned in an instant?

    You're busy.

    You gaht finite resources.

    So if'n you need a sales page, or sum funky designs, or UGC cahntent by the skewer -- in like seckinds -- then why deal with pesky hoomans drainin' yr bank account on a cushy whooshy whim?

    First, I guess, is bcs you exile them hoomans outta the loop they gonna make like H.G. Wells's MORLOCKS.

    Seckind, AI has disrupted legal copyright/patent norms like a vamp with a protooberant dick to match its fangs.

    All your assets are slave belong to us!

    Third, you set AI in mowshwaahn, it gonna speweth out its wisdaahmbase (evrywan's raided shit) 'pon command.

    What imprisoned demons may you unwittingly unleash from yon algorithm's seecrit vestibyool?

    Naht your choice to say.

    An' when they flap their wings ovah yr mortal dealings, fangs prompted by your intention ... an' conflict follows -- then who is to blame?

    "Not me. I'm not responsible for AI's algorithms."

    Oh, really? Then why did you choose to take advantage of such services?

    "I simply wished to save time and cash. Get the job done quicker and easier. Better."

    Like you could wave a magic wand? Without knowin' which sorcerer cast its ass?

    Thing is, so many AI models have breached copyright an' indulged in historic asset theft.

    Existential Intelligence gonna kick back all ovah, tellya.
    Signature

    Lightin' fuses is for blowin' stuff togethah.
