Just wanted to prove that political diversity ain’t dead. Remember: don’t downvote just because you disagree.

  • jsomae@lemmy.ml (OP) · 10 hours ago

    My intuition for a person’s overall moral value is something like the integral of their experiences so far multiplied by their expected future QALYs. This fits my intuition about why it’s okay to kill a zygote, and why it’s not morally abominable to, say, slightly shorten the lifespan of somebody (especially someone already on the brink of death), or to, erm, put someone out of their misery in some cases.
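
    One way to write that down, purely as a sketch (the notation is mine, nothing standard): let e(τ) be the experience rate at age τ and Q(t) the expected future QALYs remaining at age t. Then

    ```latex
    % Sketch of the intuition above; e and Q are my own notation.
    % V(t): moral value at age t
    % e(tau): experience rate at age tau
    % Q(t): expected future QALYs remaining at age t
    V(t) = \left( \int_0^{t} e(\tau)\, d\tau \right) \cdot Q(t)
    ```

    A zygote scores near zero because the integral term is near zero, and shortening the life of someone already on the brink of death costs little because Q(t) is already small.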

    I’m not terribly moved by single-celled organisms that can “learn.” It’s not hard to find examples of simple things that most people wouldn’t consider “alive” but that can “learn.” For instance, a block of metal can “learn”: it responds differently based on past stresses. So can “memory foam.” You could argue that a river “learns,” since it can find its way around obstacles and then double down on that path. Obviously, computers “learn.” Here, “learn” just means responding differently based on what has happened to the thing over time, rather than the subjective, conscious feeling of gaining experience.
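
    To make that weak sense of “learn” concrete, here’s a toy sketch (entirely hypothetical, just for illustration): a stateful object whose response depends on its stress history, with no consciousness anywhere.

    ```python
    # Toy illustration of "learning" in the weak sense above: the response
    # depends on accumulated history, nothing more. Entirely hypothetical.
    class WorkHardenedMetal:
        def __init__(self) -> None:
            self.accumulated_stress = 0.0  # record of everything done to it

        def apply_stress(self, load: float) -> str:
            # Past stresses change how the metal responds to future loads
            # (work hardening raises the yield point): the same input now
            # yields a different output than it would have earlier.
            self.accumulated_stress += load
            yield_point = 10.0 + 0.5 * self.accumulated_stress
            return "deforms" if load > yield_point else "holds"
    ```

    Under the definition above, this counts as “learning,” which is exactly why that definition alone shouldn’t carry moral weight.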

    • pebbles@sh.itjust.works · 9 hours ago

      I was most curious to see answers to this section.

      Is consciousness different from the ability to experience? If they are different, what separates them, and why is consciousness the one that gets moral weight? If they are the same, then how do you count feelings? Are they measured in real time or felt time? Do psychedelics that slow time make a person more morally valuable in that moment? If it is real time, then why can you disregard felt time?

      I have a few answers I can kinda infer: you likely think consciousness and the ability to experience are the same. You measure those feelings in real time, so 1 year is the same for any organism.

      More importantly, on to the other axis: did you mean the derivative of their experiences so far (I assume with respect to time)? That would give the experience rate. The integral with respect to time would give the total. I think you wanted to end up with rate * QALYs = moral value. The big question for me is: how do you personally estimate something’s experience rate?
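
      Just to pin down the distinction I mean (E is my notation for cumulative experience, nothing standard):

      ```latex
      % E(t): cumulative experience up to time t (my notation, for illustration)
      \text{rate}(t) = \frac{dE}{dt}, \qquad
      \text{total}(t) = E(t) = \int_0^{t} \frac{dE}{d\tau}\, d\tau
      % My guess at the intended formula: moral value = rate(t) * (expected future QALYs)
      ```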

      Given your previous hierarchy, with humans near the top and neurons not making the cut, I assume you believe space has fundamental building blocks that can’t be made smaller. That would make it possible to compare the amount of possible interaction in each system.

      Edit: oh yeah, and at the end of all that I still don’t know why brains are different from a steel beam in your moral-value equation.

      • jsomae@lemmy.ml (OP) · 9 hours ago

        > You measure those feelings in real time, so 1 year is the same for any organism.

        Well, I said “integral” as a vague gesture at the idea that things can have a greater or lesser amount of experience in a given amount of time. I suppose we are looking at different x-axes?

        I don’t know how to estimate something’s experience rate, but my intuition is that every creature that lives at least a year and is visible to the naked eye has roughly the same experience rate, within an order of magnitude or two. I think children have a greater experience rate than adults because everything is new to them; as a result, someone’s maximal moral value is biased toward the earlier end of their life, like their 20s or 30s.
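
        As a toy illustration of that bias (all numbers made up): take a constant experience rate r over a lifespan L, and use remaining years as a crude stand-in for expected future QALYs.

        ```latex
        % Toy model: constant experience rate r, lifespan L, remaining years as Q(t).
        V(t) = (r \cdot t)(L - t)
        % dV/dt = r(L - 2t) = 0 at t = L/2, so the peak is mid-life (40 for L = 80).
        % A higher childhood rate front-loads the accumulated-experience term and
        % shifts the peak earlier, toward the 20s or 30s.
        ```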

        > I still don’t know why brains are different from a steel beam

        This is all presupposing that consciousness exists at all. If not, then everything’s moral value is 0. If it does, then I feel confident that steel beams don’t have consciousness.

        • pebbles@sh.itjust.works · 9 hours ago

          Dang, that last one is the most interesting to me. Also, sorry for getting anal about the axis. I trust you knew what you were saying.

          > This is all presupposing that consciousness exists at all. If not, then everything’s moral value is 0. If it does, then I feel confident that steel beams don’t have consciousness.

          So there is a moral hierarchy, but you regard its source as only possibly existing and as extremely nebulous. Given that foundation, why do you stand by the validity of the hierarchy, and especially, why do you say it is moral to do so?

          Also, I imagine the difference in how you see the steel beam vs. a brain comes down to how much communication you’ve understood from each. Do you think our ability to understand something or someone is a reasonable way to build a moral framework? I personally think there are many pitfalls to that approach, but I get its intuitive appeal.

          • jsomae@lemmy.ml (OP) · 9 hours ago

            The reason that I stand by the moral hierarchy despite the possibility that it doesn’t exist at all is that I can only reason about morality under the assumption that consciousness exists. I don’t know how to cause pain to a non-conscious being. To give an analogy: suppose you find out that next year there’s a 50% chance that the earth will be obliterated by some cosmic event – is this a reason to stop caring about global warming? No, because in the event that the earth is spared, we still need to solve global warming.
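
            In expected-value terms (my numbers, just to make the analogy concrete):

            ```latex
            % 50% chance the Earth is obliterated; B = benefit of climate work if it survives.
            E[\text{value of climate work}] = 0.5 \cdot 0 + 0.5 \cdot B = 0.5\,B > 0 \quad (B > 0)
            ```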

            It is nebulous, but everything is nebulous until we learn more. I’m just trying to separate things that seem like pretty safe bets from things I’m less sure about. Steel beams not having consciousness seems like a safe bet. And if it turns out that consciousness exists, works really, really weirdly, and steel beams do have consciousness, there’s still no particularly good reason to believe that anything I could do to a steel beam matters to it, seeing as it lacks pain receptors.