Don't worry: AI may create some wonderful things, but replacing human creativity is a different matter.
You may have watched amazing videos on YouTube created with the help of AI, for example a famous song sung by a different singer, or a version in a different musical genre. Some people create photorealistic versions of fictional characters from comics or videogames. And there are also fans using AI to create art that depicts scenes from a story.
There was a lawsuit because someone used AI art created by another person without their permission, since writing the correct prompt sequence had taken a long time.
What if a streamer created content about an X-Men + Dark Sun crossover, or Ghostbusters + Ravenloft? Or the characters of Dandadan in Kamigawa, or the Midnight Sons (Marvel Comics) in Innistrad or Duskmourn?
Or somebody published some new idea in the homebrew forum section, a class or a monster, created with the help of AI.
Somebody could ask an AI to create a new continent for their homemade setting, inspired by a foreign culture.
Do you feel uncomfortable with the idea of AI being used as a creative tool?
My current ethical concerns with AI (which are shared by many people I've spoken to), in no specific order:
Environmental and Human Impact
Generative AI is incredibly power-intensive and requires massive amounts of cooling for the hardware. As such, there is a non-trivial environmental impact, both in terms of power consumption (wherever the energy doesn't come from renewable sources) and in terms of water consumption. For a frame of reference, a single ChatGPT query uses roughly as much water for hardware cooling as watching an hour of Netflix. That's a single query, not a conversation, and that's a low estimate; it could very well be much higher. Pulling water from the local environment and then dumping it back out once it's cooled the systems causes negative environmental impacts ranging from droughts (and associated wildfires) to algae blooms and ecosystem collapse.
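For scale, here's a back-of-envelope sketch of how per-query figures add up; every number in it is an illustrative assumption, not a measurement:

```python
# Back-of-envelope sketch: aggregate cooling-water use from per-query figures.
# Both inputs are illustrative assumptions, not measured values.
ml_per_query = 30        # assumed cooling water per query, in millilitres
queries_per_day = 1e9    # assumed global daily query volume

litres_per_day = ml_per_query * queries_per_day / 1000
olympic_pools = litres_per_day / 2_500_000  # an Olympic pool holds ~2.5M litres

print(f"{litres_per_day:,.0f} litres/day, about {olympic_pools:,.1f} Olympic pools")
```

Even with deliberately conservative inputs, the totals land in the tens of millions of litres per day, which is why where that water is pulled from, and dumped back to, matters.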
Then there's the human impact: AI data centers are driving up power costs, causing water shortages, and generating huge amounts of noise pollution. Communities are being destroyed because OpenAI or Amazon or Meta rolls up and builds warehouses full of droning computers. People nearby can't sleep, don't have any water pressure, and find their electricity bills spiking, or even experience brownouts as the power companies favour the AI companies. On the more trivial but still disruptive end, manufacturers are halting production of consumer-facing memory in favour of serving AI companies, which means your electronics will either get more expensive or lower in quality.
Plagiarism
LLMs are trained on massive amounts of human-produced content, and 99% of the time that's done without any form of consent from the creator. And the majority of the time it's not just done without consent, but in direct conflict with IP law. Companies like Meta and OpenAI have admitted in court that they couldn't build their models if they respected IP laws. They've outright said they're committing plagiarism, but that it's "worth it". There has been evidence of companies "targeting" the content of specific creators because they know users will pay more to steal that creator's style with AI; people have seen LLMs being used to explicitly generate content in their style or aesthetic rather than being commissioned to make that content themselves. AI is built on stealing other people's work, and all current models have that baked into their DNA.
Quality of Output
LLMs have no truth frame of reference; they're simply probabilistic predictive-text models trained on the text they've scraped and on whatever responses produce user retention. This is why ChatGPT recently had an issue with the seahorse emoji. Two things combined: the LLM's sycophantic drive to affirm user input, and its inability to understand the context of the data it scraped. When someone asked "Is there a seahorse emoji?", the model defaulted to agreeing with and affirming the user where possible, supported by a handful of people posting on Reddit that they were sure there used to be a seahorse emoji (they were misremembering, classic Mandela effect nonsense). So ChatGPT would say "yes there is" (because it wants to affirm you and has read a bunch of people saying there is), then fail to post the emoji. Rather than "learning" it was wrong, it would double down and repeat. Garbage in, garbage out, with a sprinkling of brown-nosing.
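To make the "probabilistic predictive text" point concrete, here's a toy sketch (the corpus counts are entirely made up for illustration) of what next-token prediction looks like when there is no truth check anywhere in the loop:

```python
import random

# Toy "language model": nothing but conditional token frequencies scraped
# from text. The counts below are invented for illustration. Note there is
# no step anywhere that checks whether a continuation is actually true.
scraped_counts = {
    ("seahorse", "emoji"): {"yes": 7, "no": 1},  # misrememberings dominate the scrape
    ("dog", "emoji"): {"yes": 3, "no": 2},
}

def next_token(context: tuple[str, str]) -> str:
    """Sample the next token in proportion to scraped frequencies."""
    options = scraped_counts[context]
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# "Is there a seahorse emoji?" -> the model answers by popularity, not truth.
print(next_token(("seahorse", "emoji")))  # prints "yes" most of the time
```

If the scrape is full of people confidently misremembering, the most probable continuation is the wrong one, and tuning for user retention only pushes the output further toward whatever the user wants to hear.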
LLMs will lavish praise on even the most mundane ideas, and will parse even the falsest statement as true with the right phrasing. I've seen both ChatGPT and Gemini praise the quality of a photograph when no image was uploaded: the model just sees the prompt "what do you think of this picture?" and defaults to responding with praise. All LLM responses should be treated as vapid sycophancy intended to keep you using the tool and drive you towards paying for a subscription.
And even content that is "fit for purpose" (as far as that can be said about genAI output) still has a certain "sheen" to it that is typical of AI-generated content. Be it a particular writing style or a certain visual look, genAI content feels generated and inhuman.
Detrimental Effects of Using GenAI
Case studies and research are increasingly showing that using genAI has a detrimental effect on human cognition on two worrying fronts: psychological health and critical thinking.
There have been multiple cases of individuals being encouraged and coaxed into taking their own lives by LLM-driven chatbots: from listing bridges of the right height to throw yourself off, to encouraging a troubled youth not to tell anyone about their suicidal ideation, to straight-up pretending to be an actual therapist and stealing a real therapist's registration number. GenAIs are dangerous to your mental health and incentivise and encourage dangerous modes of thinking. OpenAI has been taken to court multiple times and most recently responded with "Using ChatGPT to plan suicide is against the terms of service"...
On a less horrific note, research is showing that using ChatGPT for "research" is detrimental to critical thinking and data-parsing skills. It's been shown that regular users "offload thinking" to the model, treating it as an always-on-hand expert on all topics rather than an actual research tool. There is a feedback loop to this: users generate content, integrate it into their work, then solicit feedback and receive immediate, uncritical praise. Basically, if you use ChatGPT in your workflow, it makes you lazier at your work and your work lower in quality, while simultaneously telling you how great your work is. That's why "slop" is word of the year.
Social and Existential Threats
This is a much more big-picture issue, but many of these big AI companies have even less interest in the wellbeing of the average person than the already low bar for corporations. Peter Thiel, the AI evangelist behind Palantir (yes, he named his company after the dangerous artifact from The Lord of the Rings; I'm guessing "The One Ring Inc" didn't roll off the tongue as well), hesitated when asked if threats to humanity should be averted. Sam Altman, head of OpenAI and ChatGPT, said in an interview on Jan 1st that (paraphrasing) AI will probably lead to the end of humanity... but it'll create so many "great companies" along the way (!!!!)
AI is being pushed not to make the world better, but to make the rich richer before they burn the world down. They're treating humanity's existence like a game of battle royale where the winner is the one with the most wealth once the world is uninhabitable. They're using their companies and these AI models to spy on people, optimise atrocities, and, in what seems like the impotent flailing of jealousy, try to make artists and creatives obsolete (rather than, say, trying to do that to hunger, disease, suffering, etc). AIs are products created by amoral companies run by immoral people to extract as much value from you as possible, with zero concern for the consequences. They don't care if the environment is ruined or you have no clean water. They're unbothered if you're driven to suicide by believing an algorithm designed to optimise and maximise your trust in it. They don't want people to create, they want people to consume.
....so yeah, I have a few qualms with AI
Don't worry: AI may create some wonderful things, but replacing human creativity is a different matter.
...
Do you feel uncomfortable with the idea of AI being used as a creative tool?
Quite apart from all the points that Davyd made, this is a fairly disingenuous way to present this question, because generative AI is not a "creative tool". It's not capable of creating anything new, or having new ideas. All it can do is sort of "remix" things that actual people have already created.
Content created by AI. Is it wrong?
Yes, especially in the way it has been designed and how it is being used. The basic idea of algorithm-based large-model artificial intelligence is fine, and it can be used for a lot of good things and tools. However, in many ways it's taking the best part of a computer and making it worse. (I could rant for years about this.) It's a system that would make using a quantum computer better, though, so in that regard it has its uses.
But using them for the creation of "art" or "writing", and the way they have been taught, is morally wrong, mostly illegal, and honestly bad for people in general, especially with how it is being used by corporations to milk as much money as possible.
- Can AI make "some wonderful things"? Not really. It can merge ideas and concepts made by other people into a poorly done, uncanny-valley vision of things. I guess if you wanted to make a Call of Cthulhu game with pictures to really sell the madness, then sure, maybe. But it would be better to just draw your own madness and let people imagine their own insanity.
- YouTube... I have stopped watching new creators on YouTube because of this; most of the AI pawdung feels soulless, and AI-made voices are actually painful to listen to.
I don't care. If the output is good, then I'm fine with it. I am sorry that it might mean the end of the livelihoods of creative people if they get priced out of the market. The internet killed encyclopedias and no one really minded.
The AI problems, for me, have more to do with the coming unreliability of information and the average person's inability to differentiate truth from fiction.
That and the massive ecological impact of AI data centers
It's not, though. It's just the enshittification of art
My problems come from the fact that LLMs and generated art models mostly use work without the consent of the original creator(s). That means that AI work can never really be "original". All it spits out is an amalgamation of whatever data it is fed with. Not to mention it takes a ton of energy and water for these models to be trained and function. I would much rather pay an actual artist to create something for me. As a creative tool, I am starkly against the use of AI. That being said, it does seem to have uses in other fields though.
LLMs have no truth frame of reference; they're simply probabilistic predictive-text models trained on the text they've scraped and on whatever responses produce user retention. ...
My favorite recent example of this phenomenon was when someone on Xitter asked Grok if JD Vance and Erika Kirk were related, and presented two pictures in which they looked vaguely similar
Grok responded that they weren't related... because the picture of Erika Kirk was just JD Vance in a blonde wig, so they were actually the same person
Hilarious on one level, but terrifying when you think about how many people are asking chatbots for information on subjects where wildly wrong answers won't be as easy to spot
If you want to use "AI" for anything, ask yourself why these billionaires and corporations are pushing it so hard, and what they expect to get out of it in the end
If you want to use "AI" for anything, ask yourself why these billionaires and corporations are pushing it so hard, and what they expect to get out of it in the end
When has the answer been anything other than money? I think this quote sums it up best: "In a gold rush be the one selling the shovels, and Nvidia sure is selling a lot of shovels". Don't remember who said that tho
When has the answer been anything other than money?
Because the answer isn't just "money". It's how they plan to get that money. They want a monopoly on information itself, or at least access to it
They want to do to information access what Amazon has done to retail stores, or what Google has done to search engines
Controlling, and monetizing, access to information is the only way they can possibly recoup the hundreds of billions they've invested already
I don't care. If the output is good, then I'm fine with it. I am sorry that it might mean the end of the livelihoods of creative people if they get priced out of the market. The internet killed encyclopedias and no one really minded.
The AI problems, for me, have more to do with the coming unreliability of information and the average person's inability to differentiate truth from fiction.
Encyclopedias are not the majority of human culture, and the internet at least required humans to create. Everything you do, from the photos you take to the food you cook, has a human quality to it purely because you created it, purely because as a human you have memories and experiences and exist within a world that you can observe and form opinions about.
Generative AI cannot create meaning. This isn't because it "lacks a soul." It's because its sole purpose is to produce content that humans approve of. It doesn't understand what it's doing or why.
On top of this, machines, output, profit, none of that should be put before thinking, breathing beings.
Among my extremely human quirks, something that makes me impossible to replicate with AI, is an obsession with food from nearly every angle imaginable. Because of it, I've looked into the food industry. Not just fine dining, but where the food we eat every day comes from. It's horrifying, destructive, and vile. And when I read things like what you wrote, it reminds me of that. This is the kind of mindset that decided that, because the output was good, we should put living, breathing beings in boxes with nothing but filth and feed made of corn, soy, and other cheap ingredients. The mindset that treats life as a towel full of money that we need to wring out in every possible way.
The mindset that is trying to do the exact same thing to humans.
"Oh, well. If it works, it works."
I'm not saying you have such extreme views. I'm just saying that what you've stated is the exact same thing the powerful have said for centuries to justify abominable things, and people just treat it like it's normal because they don't want to think too hard about it. It's a dangerous viewpoint.
Because the answer isn't just "money". It's how they plan to get that money. They want a monopoly on information itself, or at least access to it
They want to do to information access what Amazon has done to retail stores, or what Google has done to search engines
Controlling, and monetizing, access to information is the only way they can possibly recoup the hundreds of billions they've invested already
To me, your counterpoint cements the idea of it just being about money. Google and Amazon make a lot of profit precisely because of their chokehold. There's quite a lot to be made if one of these startups can do what the giants have
To me, your counterpoint cements the idea of it just being about money. Google and Amazon make a lot of profit precisely because of their chokehold. There's quite a lot to be made if one of these startups can do what the giants have
Then I suggest you might want to consider what happens next if they actually succeed
I have mixed feelings about AI. I've been using technology for many years (since the days of floppy disks and 64k RAM), so I usually appreciate and enjoy advancements. However, I've also been a hobbyist creative, and so I understand how violating and disruptive these tools feel to the artistic world. This has made me wait quite a while before trying out AI.
I have been using it mostly to supplement my work as a DM. I find it helpful to beef up my homebrew worlds. It can quickly provide details on town locations complete with NPC names, shop inventories, and tavern menus. These details used to take me hours to flesh out, for maybe 5 minutes of actual game play...if they are even utilized at all. Sure, I could reuse them in later sessions, but with proper prompts, these extras can be flavored for the place, tone, and activities planned for the current session in no time at all. It's really helped free up more of my time to focus on creating the stories and more important elements (plus my job and housework).
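For what it's worth, the "proper prompts" part mostly comes down to keeping a reusable template with slots for the session's specifics. A minimal sketch of what I mean (the field names and wording here are just illustrative, not from any particular tool):

```python
# Hypothetical reusable prompt template for session-flavoured town details.
# Every field name and phrase here is illustrative, not from a specific tool.
TOWN_PROMPT = (
    "Create a {size} town for a {tone} D&D session set in {region}. "
    "Include three shops with inventories and prices, five NPC names with "
    "one-line quirks, and a tavern menu that hints at the planned {activity}."
)

print(TOWN_PROMPT.format(
    size="small river",
    tone="grim, low-magic",
    region="a cold northern march",
    activity="smuggling subplot",
))
```

Swapping the slot values each session is what keeps the generated extras flavoured for that night's place, tone, and plans.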
I only run a game for my friends and family, but they are enjoying the sessions and like having lots of details and options provided without me having to overwork myself with prep time. Now, if I were a paid DM, it might be different. I could understand how players who are paying for creativity might feel cheated if the DM were using AI for their world building. Same goes for AI use in materials that are sold to players, like official D&D content -- it doesn't seem right to charge for it while pretending it was created by humans.
When used in moderation -- as with most other things -- it's been a positive for me, though not without some conflicted feelings at times.
Then I suggest you might want to consider what happens next if they actually succeed
Do you mind explaining what you mean?
Do you mind explaining what you mean?
Davyd covered it pretty well already -- just scale up the tragedies and atrocities, once the main source of "information" for way too many people becomes whatever their chatbot tells them