Someone at WotC please tell Chris Cocks to go lie down.
In a March 1st interview with VentureBeat, Cocks states he foresees a future where AI is generating content.
Absolutely not! The OGL debacle should have been a lesson that the community will not put up with nonsense. The blowback over the AI tools used to touch up art in Bigby's should have been a lesson, yet here we are.
Hasbro needs to sell WotC, or Chris Cocks needs to either leave or stay out of it. He is clearly not friendly to this community.
Of course it will go to AI. What do you think a video game with a story line is?
There will always be a portion of the population who wants to play it like it's a video game. The toe dippers will play an AI version first, then decide whether to get into an RL group. There will always be those DMs who want the AI machine to fill in the details and descriptions of the things they just do not want to do.
Either Hasbro does it first and best, or someone else does it and forces Hasbro to either buy them out or make their own behind everyone else.
Here is a link to the interview. The interview is not as alarming as the OP makes it seem.
There are two segments of the interview which discuss AI. The first segment discusses actual plans for the usage of AI in games, and the second discusses Cocks’ personal usage of AI in his own D&D game.
In the first section, Cocks discusses the possibility of using AI to support existing products - much like they have an AI Ouija board and an AI question generator for Trivial Pursuit. Cocks is pretty clear that, in the D&D sphere, they are looking at ways to make “integration” of physical and digital play easier. Never once does he say they are considering making content.
AI-powered digital tools are not all that alarming. Something like an AI encounter builder or an AI-powered dungeon map generator makes a lot of sense - they are not really generating “content” in the form of rules text or art (other than maps), just providing DMs a tool to make prep time easier. Given the five-decade-old DM shortage problem, integrating AI in a manner which helps reduce some of the busywork of D&D seems like a fairly sensible choice.
The next section talks about Cocks’ personal use of AI - he apparently uses the Bing image generator to make art for his characters. He is very, very clear in this section that, while he might use AI for his personal campaigns, his generating content for D&D “doesn’t have anything to do with work.” This raises a bit of a yellow flag for me - it is not that far of a jump from “this works for me, but does not belong in my work” to “well, what if it did belong in my work?” but I am not going to miss any sleep over a potential future problem, when the clear reading is that generative tools are not presently being workshopped.
Overall, this seems like another example where any alarm comes more from Cocks’ staggering lack of PR common sense than from anything he said. At no point does he ever say they are planning on “generating content” for D&D; and he says they are not working on generating content with AI… but does so in a roundabout way. Had he taken the time to say the single sentence “Just to be clear, we are working on AI tools to help DMs, and are not planning on generating content for our games” then, perhaps, people would not be as concerned.
But, he didn’t. At this point, he really should know better - the D&D community is exceptional at reading things that are not actually said and at completely missing any subtext which requires critical reading.
Please provide a link to this interview/article so users can form a firsthand impression of what was actually said.
They talk a fair amount about ethical use of AI - using their own material to power it for example, rather than unlicensed material.
AI is inevitable. It's going to happen. What we can do now is guide the market so it uses it ethically and responsibly. It's important that we reward ethical use of it now, rather than trying to hold back the inevitable, succeeding for a few years, and then having it come back without that ethical foundation.
It's important that we push back against unethical use rather than overreacting and rejecting its use altogether in blind reactionism, or it'll be worse in the long term.
There are ethical ways to develop and use these tools. I think WotC creating a designer sidekick AI for DMs would be a revolution (and it can be done with their materials alone).
AI DMs can address the DM challenge, but I'm hopeful that WotC will start focusing on training a Dungeon Master Corps and making it easier to learn how to DM confidently. That is the sweet spot of the business, as it'll drive adoption and retention.
I can see an AI DM down the line, unless a lot more human DMs suddenly arrive on the scene, and I don't see that happening any time soon. They have made playing more fun than DMing, or maybe it always was. It might take some work and trial runs to get all the bugs out, but it could be a thing one day.
I can't speak to how much more fun playing is than DMing (YMMV) but playing is certainly easier. There are a lot of hurdles to DMing that WotC can shrink or remove, and I can easily see AI being beneficial for at least some of them.
Honestly, I don’t see AI DMs really being a thing until we get a massive paradigm shift in how AI functions. An AI cannot actually think critically and consider “does this make sense?” or “how do I adapt to this unforeseen situation or interaction?” It just stitches together a response that an algorithm determines to be most likely to be positively received. If you look over some examples of AI-generated character backstories, you can typically pick out numerous inconsistencies or contradictions. It’s a useful tool, but it’s about as ready to stand in for a DM as cruise control is to stand in for a driver.
As long as the end users are made aware of it and given the ability to opt out of using it, I have no issues with it. I will say I do not feel there is an ethical use for "AI"; its very nature is to be unethical. JMHO.
I tried D&D with ChatGPT a while back. In one case it was able to generate a very brief and uneventful encounter. In another case I asked it to run a fight between a level 1 cleric and a goblin.
With that cleric v goblin, let's just say ChatGPT is both a cheating powergamer and dumb. A cheating powergamer in the sense that it wanted the level 1 cleric to cast Spiritual Weapon followed by Cure Wounds (might've been a different leveled spell). In other words, it cheated on what spell levels the character could cast and broke the rule that casting a bonus action spell limits your action to a cantrip. And dumb in the sense that it had the character and the goblin do a lot of pointless dodging.
Though in the Play by Post forum a while back, I know there was someone who had tweaked a more current ChatGPT to see if he could run a D&D campaign with it. Haven't checked how that's going, though.
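For reference, the rule it kept tripping over is easy enough to spell out. Here's a rough sketch of the check in Python; the spell data and the function are just illustrative, not from any official tool:

```python
# Toy check of the 5e rule ChatGPT broke: if you cast a spell as a bonus
# action, the only other spell you can cast that turn is a cantrip with a
# casting time of one action. Spell data is abbreviated and illustrative.
SPELLS = {
    "Spiritual Weapon": {"level": 2, "casting_time": "bonus action"},
    "Cure Wounds":      {"level": 1, "casting_time": "action"},
    "Sacred Flame":     {"level": 0, "casting_time": "action"},
}

def legal_turn(action_spell, bonus_spell, max_slot_level):
    a, b = SPELLS[action_spell], SPELLS[bonus_spell]
    # Can't cast leveled spells above the slot levels you actually have.
    if a["level"] > max_slot_level or b["level"] > max_slot_level:
        return False
    # Casting a bonus-action spell means the action spell must be a cantrip.
    if b["casting_time"] == "bonus action" and a["level"] > 0:
        return False
    return True

# A level 1 cleric only has 1st-level slots, so ChatGPT's turn fails twice over.
print(legal_turn("Cure Wounds", "Spiritual Weapon", max_slot_level=1))   # False
# A higher-level cleric pairing a cantrip with Spiritual Weapon is fine.
print(legal_turn("Sacred Flame", "Spiritual Weapon", max_slot_level=3))  # True
```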
I don't see AI DMs with anything close to current tech... but AI DM assistance tools are already here, for things like maps and NPC portraits, and I can easily see ways of using it for other purposes, such as fleshing out bios for minor NPCs (in general, the more specific your needs are, the harder it will be to get AI to produce what you want, but when what you need is "a shopkeeper in Waterdeep", having a generative AI produce a paragraph of bafflegab might be just what you want). You could probably do it for encounter generation, but to be honest, "use XGTE rules to generate an undead-themed encounter for five level 7 PCs" doesn't require AI, it just requires a database and some tagging, though AI would be handy for natural language processing.
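To illustrate the "database and some tagging" point, something like this toy picker is all the non-AI version really needs. The monster list and XP budget below are invented for the example, not the actual XGTE tables:

```python
import random

# Toy tag-filtered encounter picker: pull monsters matching a theme tag
# until a (made-up) XP budget is spent. Data here is placeholder, not XGTE.
MONSTERS = [
    {"name": "Skeleton", "xp": 50,  "tags": {"undead"}},
    {"name": "Zombie",   "xp": 50,  "tags": {"undead"}},
    {"name": "Ghoul",    "xp": 200, "tags": {"undead"}},
    {"name": "Wight",    "xp": 700, "tags": {"undead"}},
    {"name": "Bandit",   "xp": 25,  "tags": {"humanoid"}},
]

def build_encounter(theme, xp_budget):
    pool = [m for m in MONSTERS if theme in m["tags"]]
    encounter, spent = [], 0
    while True:
        affordable = [m for m in pool if m["xp"] <= xp_budget - spent]
        if not affordable:
            break
        pick = random.choice(affordable)
        encounter.append(pick["name"])
        spent += pick["xp"]
    return encounter, spent

# "An undead-themed encounter for five level 7 PCs" -- the budget is invented.
print(build_encounter("undead", xp_budget=2000))
```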
That's because, despite companies advertising current AI as being something akin to what you'd expect from Star Trek or Star Wars, all we have at present are very complex word association programs that can string words together based on how they've seen words strung together in the content they were trained on. They don't know what those words mean.
That's not going to change any time soon, and it's likely to get worse because it's starting to get to the point where AIs are being trained on content generated by other AIs.
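If anyone wants to see the "word association" idea in miniature, here's a deliberately tiny caricature (nothing like a production model, obviously): it only records which word followed which in its training text, then strings words back together from those counts, with zero idea what any of them mean.

```python
import random
from collections import defaultdict

# Toy "word association" generator: learn which word follows which in the
# training text, then chain words together from those observations alone.
training_text = (
    "the dragon sleeps on a hoard of gold "
    "the dragon wakes and the party runs for the door"
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def babble(start, length=10):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Reads like English locally, but the program has no idea what a dragon is.
print(babble("the"))
```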
D&D has been using random tables to generate content for decades. There are rules in the DMG where you can make a dungeon, villain, etc. just by randomly rolling the dice and seeing what comes up. What's the difference between that and an AI?
I'll tell you one difference. With an AI you don't have to pay it $50 per book. >:)
Seriously though, AI is just a tool that you can or can not use, depending on your preferences. And if you're the type that doesn't want any AI being used, be sure to pay the content creators for the content you use.
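For comparison, the decades-old version of "generated content" is literally a die roll against a table. The entries below are invented for the example rather than copied from the DMG, but the mechanism is the same:

```python
import random

# A DMG-style random table: roll a d8 and read off the row.
# Entries are invented for the example, not copied from the book.
VILLAIN_SCHEME = {
    1: "Immortality through a dark ritual",
    2: "Conquest of a neighbouring barony",
    3: "Revenge on the adventurers' guild",
    4: "Theft of a dragon's hoard",
    5: "Corrupting a local temple from within",
    6: "Awakening something best left asleep",
    7: "Framing a rival for their own crimes",
    8: "Building the perfect undead army",
}

roll = random.randint(1, 8)
print(f"d8 roll: {roll} -> {VILLAIN_SCHEME[roll]}")
```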
There's cost, and then there's COST.
I will say I do not feel there is an ethical use for "AI"; its very nature is to be unethical. JMHO.
What's so fundamental and intrinsic to AI use that's unethical, that you can't change to make it ethical? Every criticism I've heard has been about how it's used (basically copyright infringement) rather than AI per se. The only thing I can see is that artists etc. might lose their jobs, but while that's a shame and I have sympathy for them, given the number of jobs lost to automation in our lives, it'd be oddly selective to criticise AI for that.
The problem is that in a lot of ways being unethical is baked into how it works. To get a large enough sample to actually work, they have to pool thousands, if not millions, of sources, and each one of those is essentially stealing someone else's work and talent. Sure, you could make it ethical by honouring copyright, but that would mean paying every single creator and getting them to agree, making it both too costly and too time-consuming to be viable, and an awful lot of artists would say no no matter how much you offered them. Throw in that most of the people and companies creating AI tools seem to have a really shaky grasp of ethics and don't see anything wrong with stealing other people's work to get their algorithms working, and you'll never get an ethical AI on the current business model.
There are also concerns about AI's environmental impact because it's extremely power and water intensive. Microsoft's Seattle campus has IIRC tripled its water usage since the company started AI research, because the water is used to cool the servers running the AI programs.
That, and if the initial training data for the AI is biased or improperly applied (wrong format or errors in entry), then add the issues of training material updates, re-evaluation, and possible further changes that may introduce more biased outcomes, and one quickly realizes AI can be useful in some cases, but as a GM or even a DM, no AI will ever be ethical or productive enough to handle the task.
Now, if it’s used to help aid and improve the speed and quality of a living GM/DM, as an aid and support in the process of creation, then we’ve had that sort of “Artificial Integration” for quite some time, with the ethical issues and practices kept right where they belong: end-user experience and responsibility.
But that’s my two copper on the issue.
Honestly I don’t understand how it can be unethical use if it’s referencing millions of different data points, any more than an artist is unethical for studying famous works of a style they want to use or an author is unethical for looking at how other authors write certain scenes. The vast majority of art is built on what came before; this is just the next technological extension of that. Obviously if you exclusively train it on a single artist it’ll just imitate them, but once you’ve got several dozen in the mix, can it really be said to be taking enough from any single one to be stealing?
Regardless of whether the end result is a direct copy of the original or the product of millions of data points, it's still generating income by using the work of others without acknowledgement or payment, and doing so in a much more direct way than a real-life artist who spent years practicing and learning a craft by looking at the work of others. No matter how much influence a real artist takes from others, the end result is still their own craft, hard work and talent, and even without those influences they might well have become a talented artist just from practice. Meanwhile, AI without stealing from others doesn't exist; it's a parasite with no talent, and worse, it's a parasite trying to convince people that the people it's stealing from aren't needed.
And that's just the ethics of its sources and creative algorithm; as others have said, there are huge ethical concerns around its resource-intensive existence and the environmental problems associated with server farms that big.
AI is being used now. It's not a future thing.