The animals we create are morally entitled to the exact same unconditional love and protection as our own children. Leftists practice tolerance but they’re not really willing to go as far as actual compassion, empathy, and mercy. A lot of the things they tolerate, they should not.
I agree, animal rights are important. I am not sure that animals are worth as much as humans morally, but even so, the argument for shrimp welfare is extremely moving. Well worth reading. It’s easy to imagine shrimp are undeserving of compassion because they are small, have tiny brains, and have a silly name.
It seems pretty mind-bending to morally rank organisms. By what metric do you estimate humans are more valuable than a random animal?
I believe a person is their brain, and without a brain or equivalent construct, you have no moral weight. This is why I believe it’s okay to eat plants. Bacteria, too, are outside of my moral horizon. Foetuses (in the first few weeks at least) similarly are okay to abort.
By brain I don’t mean intelligence, just capacity for conscious feeling. I think stupid people are just as capable of feeling pain as smart people, so both are weighted similarly morally to me.
It seems reasonable to assert that a single neural cell is not enough on its own to produce consciousness, or if it is, then only barely. So animals with trivial nervous systems are less morally weighty than humans too, and so on up a gradient to large mammals with developed minds. Some animals, like elephants and whales, might be capable of more feeling than humans, and together with their long lifespans might be worth more QALYs than a human altogether.
I see how that could feel right. It doesn’t make sense to me personally though.
Is consciousness different from the ability to experience? If they are different what separates them, and why is consciousness the one that gets moral weight? If they are the same then how do you count feelings? Is it measured in real time or felt time? Do psychedelics that slow time make a person more morally valuable in that moment? If it is real time, then why can you disregard felt time?
What about single-celled organisms like Stentor roeselii that can learn? Why are they below the bar for consciousness?
My intuition for a person’s overall moral value is something like the integral of their experiences so far multiplied by their expected future QALYs. This fits my intuition that it’s okay to kill a zygote, and that it’s not morally abominable to, say, slightly shorten somebody’s lifespan (especially someone already on the brink of death), or to, erm, put someone out of their misery in some cases.
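To put that loosely in symbols (a rough sketch of the intuition, not a formula I’d defend, with e(t) as made-up notation for experience rate at age t):

    moral value at age T ≈ (∫₀ᵀ e(t) dt) × (expected future QALYs at T)

Both factors are near zero for a zygote, and the second factor is near zero for someone on the brink of death, which is how those two intuitions fall out.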
I’m not terribly moved by single-celled organisms that can “learn.” It’s not hard to find examples of simple things that most people wouldn’t consider “alive” but that do “learn.” For instance, a block of metal can “learn”: it responds differently based on past stresses. So can “memory foam.” You could argue that a river “learns,” since it can find its way around obstacles and then double down on that path. Obviously, computers “learn.” Here, “learn” just means responding differently based on what’s happened over time, rather than any subjective, conscious feeling of gaining experience.
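To make concrete how low that bar is, here’s a toy sketch in Python (entirely made up, just to illustrate this sense of “learn”): a few lines whose response depends on their history, with nothing anyone would call experience.

    # A toy "learner": its response depends only on its history.
    class TrivialLearner:
        def __init__(self):
            self.counts = {}  # how often each stimulus has been seen

        def respond(self, stimulus):
            # Habituation: respond less to familiar stimuli, like a
            # repeatedly stressed metal block or a well-worn riverbed.
            seen = self.counts.get(stimulus, 0)
            self.counts[stimulus] = seen + 1
            return 1.0 / (1 + seen)

    learner = TrivialLearner()
    print(learner.respond("poke"))  # 1.0  - full response to a novel stimulus
    print(learner.respond("poke"))  # 0.5  - habituated after one exposure
    print(learner.respond("poke"))  # 0.33 - and fading further

It “learns” in exactly the sense above, and clearly has no moral weight.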
I was most curious to see answers to this section.
Is consciousness different from the ability to experience? If they are different what separates them, and why is consciousness the one that gets moral weight? If they are the same then how do you count feelings? Is it measured in real time or felt time? Do psychedelics that slow time make a person more morally valuable in that moment? If it is real time, then why can you disregard felt time?
I have a few answers I can kinda infer:
You likely think consciousness and the ability to experience are the same. You measure those feelings in real time, so one year is the same for any organism.
More importantly, on to the other axis:
Did you mean the derivative of their experiences so far (with respect to time, I assume)? That would give an experience rate; the integral over time would give the total. I think you wanted to end with rate × QALYs = moral value. The big question for me is: how do you personally estimate something’s experience rate?
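In symbols, to show what I’m asking (my notation, not yours): if E(t) is the total experience accumulated by time t, then the derivative e(t) = dE/dt is the experience rate, and integrating the rate just gives back the total, ∫₀ᵀ e(t) dt = E(T). I read your formula as really wanting something like e(now) × expected QALYs, which is why the rate is the part that needs estimating.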
Given your previous hierarchy, with humans near the top and single neurons not making the cut, I assume you believe space has fundamental building blocks that can’t be made smaller, and that it is therefore possible to compare the amount of possible interaction in each system.
Edit: oh yeah, and at the end of all that I still don’t know why brains are different from a steel beam in your moral value equation.
You measure those feelings in real time, so one year is the same for any organism.
Well, I said “integral” as a vague gesture at the idea that things can have a greater or lesser amount of experience in a given amount of time. I suppose we are looking at different x-axes?
I don’t know how to estimate something’s experience rate, but my intuition is that every creature that lives at least a year and is visible to the naked eye has roughly the same experience rate, within an order of magnitude or two. I think children have a greater experience rate than adults because everything is new to them; as a result, someone’s maximal moral value is biased toward the earlier end of their life, like their 20s or 30s.
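As a toy calculation with made-up numbers: with a constant experience rate and an 80-year life, the product (experience so far) × (years left) is t × (80 − t), which peaks at age 40. Front-load the rate into childhood and the accumulated total grows fastest early, so the peak slides back into the 20s or 30s.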
I still don’t know why brains are different from a steel beam
This is all presupposing that consciousness exists at all. If not, then everything’s moral value is 0. If it does, then I feel confident that steel beams don’t have consciousness.
Dang, that last one is the most interesting to me. Also, sorry for getting anal about the axis; I trust you knew what you were saying.
This is all presupposing that consciousness exists at all. If not, then everything’s moral value is 0. If it does, then I feel confident that steel beams don’t have consciousness.
So there is a moral hierarchy, but you regard its source as only possibly existing and as extremely nebulous. Given that foundation, why do you stand by the validity of the hierarchy, and especially why do you say it is moral to do so?
Also, I imagine that the difference in how you see the steel beam versus a brain is based on how much communication you’ve understood from each. Do you think our ability to understand something or someone is a reasonable basis for a moral framework? I think there are many pitfalls to that approach personally, but I get its intuitive appeal.
Well, I didn’t say all animals, I said the ones we create. When you create an individual, the act places you in that individual’s debt. You don’t own them, you owe them. We have a duty not to harm any individual on Earth so far as we can help it, but we have far greater responsibilities to those individuals we bring into existence. There is no difference, morally, between forcing a child into existence and forcing an animal into existence.
I do find topics like natalism and deathism quite fascinating. I’m not certain you’re correct, but I do think what you’re saying is very plausible. I lean more utilitarian, so I find it hard to justify the notion of debt to a specific entity – after all, if you can do right by the entity you create, shouldn’t it be equally good to do right by another entity?
I took a look at your link. I find it reprehensible, and exactly what I mean when I say the left is incapable of having compassion and mercy. This charity is exactly the sort of thing people use to psychologically enable themselves to continue torturing animals rather than changing their behaviour.
I’m not sure that Bentham’s Bulldog is a leftist; he seems rather all over the place. This really isn’t the sort of thing I see leftists in favour of animal welfare arguing for, generally. Regardless of the specific charity recommended to solve the problem of torturous shrimp deaths, the article makes a compelling case that we must solve the problem somehow.
Can you elaborate a bit more? I don’t seem to understand what you mean.
You haven’t met my parents.