

Reread what I said, calmer this time.
That still doesn’t address the fact that the AI energy use in your statistics covers all AI, not just image generation.
If we’re including all AI use cases, we’d have to consider all non-AI use cases on the other end too, not just gaming: anime production, 3D rendering, etc., which also use graphics card cycles.
And you’re still ignoring the very first question.
So, try again.
But we’re not comparing the global energy use of LLMs, diffusion engines, other specialized AI (like protein folding), etc. to ONLY the American gaming market.
The conversation was specifically about image generative AI. You can stop moving the goalposts and building a strawman now, and while at it answer the first question too.
• Ok, I know people’s research abilities have decreased greatly over the years, but using “knowyourmeme” as a source? Really?
• You can now run optimized open-source diffusion models on an iPhone, and it’s been possible for years. I use that as an example because yes, there are models that can easily run on an Nvidia 1060 these days. Those models are more than enough to handle incremental changes to an image in-game.
• It already has for a while, as demonstrated by it being able to run on an iPhone, but yes, it’s probably the best way to get an uncanny valley effect in certain paintings in a horror game, as the alternatives would be:
• I’ll call an open-source model exploitation the day someone can accurately generate an exact work it was trained on, not within 1 but at least within 10 generations. I have looked into this myself, unlike seemingly most people on the internet. Last I checked, the closest was a 90-something-percent-similarity image, produced after thousands of generations by an algorithm that modified the prompt over time. I can find this research paper myself if you want, but there may be newer research out there.
And if you train an open source model yourself so it can generate content specifically on work you’ve created? Or are you against certain Linux devices too?
Have you ever looked at the file size of something like Stable Diffusion?
Considering the data it’s trained on, do you think it’s:
A) 3 Petabytes
B) 500 Terabytes
C) 900 Gigabytes
D) 100 Gigabytes
Second, what’s the electrical cost of generating a single image using Flux vs. 3 minutes of Baldur’s Gate, or something similar, on max settings?
Surely you must have some idea on these numbers and aren’t just parroting things you don’t understand.
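For what it’s worth, the back-of-envelope numbers behind both questions fit in a few lines of Python. The dataset scale, per-image size, wattage, and generation time below are all rough assumptions for illustration, not measurements:

```python
# Rough, assumption-labeled arithmetic for the two questions above.

# 1) Model size vs. training data (LAION-5B scale, ~100 KB/image assumed):
n_images = 5_000_000_000            # ~LAION-5B image count (assumption)
avg_image_bytes = 100_000           # ~100 KB per training image (assumption)
dataset_tb = n_images * avg_image_bytes / 1e12
weights_gb = 2.0                    # an SD 1.x fp16 checkpoint is roughly 2 GB
bytes_per_image = weights_gb * 1e9 / n_images
print(f"training data ~{dataset_tb:.0f} TB, weights ~{bytes_per_image:.1f} bytes/image")

# 2) One local image vs. 3 minutes of gaming on the same ~350 W GPU (assumed):
gpu_watts = 350                     # high-end GPU power draw (assumption)
gen_seconds = 15                    # time to generate one image locally (assumption)
image_wh = gpu_watts * gen_seconds / 3600
gaming_wh = gpu_watts * 180 / 3600
print(f"one image ~{image_wh:.2f} Wh vs 3 min of gaming ~{gaming_wh:.1f} Wh")
```

Under those assumptions the checkpoint stores well under a byte per training image, which is the point of the quiz: the weights are orders of magnitude too small to contain the dataset.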
Having been near death a couple of times, I can say the human body seems to have mechanisms for easing death from physical trauma more than from advanced synthetic causes.
Bleeding out really just gets you very cold pretty quickly, and then very, very sleepy. Also a warm, cozy feeling the closer you get to falling asleep. Drowning is similar, and so is asphyxiation, though with asphyxiation there’s a bit more panic and random colors first before the void kicks in. You’d think it’d be the same as drowning but no - maybe with water there’s some primordial memory of the womb that activates?
If you do have a chance at surviving tho, make sure you stay awake.
Same vibes as “if you learned to draw with an iPad then you didn’t actually learn to draw”.
Or in my case, I’m old enough to remember “computer art isn’t real animation/art” and also the criticism against using Photoshop.
And there’s plenty of people who criticized Andy Warhol too before then.
Go back in history and you can read about criticisms of using typewriters over hand writing as well.
“AI” is just very advanced procedural generation. There have been games that used image diffusion in the past too, just on a far smaller and more limited scale (such as for a single creature, like the Pokémon with the spinning eyes).
What if they use it as part of the art tho?
Like a horror game that uses an AI to slightly tweak an image of the paintings in a haunted building, continuously, every time you look past them, so they look just 1% creepier?
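The game-logic side of that idea is tiny. Here’s a minimal sketch, where `apply_img2img` is a hypothetical stand-in for a low-strength img2img call to a locally-run diffusion model (stubbed out here so the sketch is self-contained):

```python
# Sketch of the "1% creepier every glance" haunted-painting loop.
# apply_img2img is a hypothetical hook: in a real build it would call a
# local img2img diffusion pipeline, with `strength` as the denoise amount.

def apply_img2img(image, prompt, strength):
    # Stub: a real implementation would re-render `image` toward `prompt`
    # at the given strength; here we just tag the image for the sketch.
    return f"{image}+{strength:.2f}"

class HauntedPainting:
    def __init__(self, image, step=0.01, cap=0.5):
        self.image = image
        self.strength = 0.0
        self.step = step   # ~1% extra creep per glance
        self.cap = cap     # stop before the painting dissolves into noise

    def on_look_away(self):
        # Called by the game each time the player looks past the painting.
        self.strength = min(self.cap, self.strength + self.step)
        self.image = apply_img2img(self.image, "slightly creepier portrait",
                                   self.strength)

painting = HauntedPainting("hallway_portrait")
for _ in range(3):
    painting.on_look_away()
print(painting.strength)  # drifts upward by ~0.01 per glance
```

The cap matters: without it, repeated low-strength passes eventually destroy the original image entirely, which kills the slow-burn effect.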
You’re right, my bad. I should have worded that reply better.
I meant that as a tool to help you code, it’s useful, especially if you already know some coding. It can help you finish a game by coding mechanics you don’t quite know how to make work, which you can then fix up yourself with the desired parameters.
If it helps you finish your idea of a game (especially if it’s something like the first game you’ve ever made), it’s useful for learning some of the workflow involved in making a game.