I'll give this thread a chance to get back on topic as we do understand this is a concerning issue for some of our players, but if it derails again into being nonconstructive and off-topic arguments, we will have to lock it. Please also keep in mind our rules:
6.1 • Company Criticism
Public criticism of Wizards of the Coast (or Hasbro, in this case) is not itself against the rules; however, users are still responsible for ensuring their conduct does not violate any other site rules.
Possible violations include, but are not limited to:
Threats of physical violence.
Doxxing (publicly sharing another person's private information).
Name-calling, insults, and other forms of harassment.
Swearing and other language not suitable for an all-ages audience.
Also keep in mind that the topic must stay related to D&D. We understand that it's natural to want to get into a discussion focusing mainly on AI itself, but please make sure you are relating it back to D&D. You can always suggest resources for further reading if it's going to go off topic. Likewise, try to make sure you're not going too deep into speculation without linking back to current evidence. We understand being concerned and sharing those concerns, but if a topic looks like it's only serving to generate toxicity and panic, without trying to ground itself in reality or have a goal in mind for discussion, the thread will have to be locked.
And if you find yourself constantly replying to one specific user, consider whether this is still about the topic or more about debating them, and whether you need to step back and disengage. If someone has broken the above rules, engaging with them is only going to escalate the situation, not solve it. We will not tolerate users harassing and disrespecting each other over this.
The most important takeaway from Cocks’ most recent set of comments: he is not saying anything new. At its core, he is reiterating the same basic points from all his prior commentary on AI’s future in D&D. That consistency is a good thing - it means their goal remains focused and tailored, and is not spiraling out of control the way corporate AI decisions often do.
Most importantly, he still is not discussing using generative AI in the production of official D&D materials. As with every other interview, his vision for AI in D&D remains optional player- and DM-facing generative tools. This means those who do not wish to interface with AI will not be required to do so - you can continue to support the AI-free base game without opting into AI-generated elements.
Now, let us look at the products he is proposing. Nothing he proposes is novel - people already use AI to generate campaign art, assist with preparation, or even develop larger aspects of a campaign. What does that mean? There are lots of problems with AI - ethical, legal, environmental, etc. - but those problems exist regardless of who performs the generation. As for environmental damage, the harm is going to be done regardless - folks are not going to stop using AI to assist with their games, so Wizards releasing their own product will likely result in little to no damage beyond what was already going to occur. The legal and ethical concerns are also lesser with a Wizards-owned product - at least they would be training their models on their own copyrighted materials, as opposed to a third party flagrantly training an AI on Wizards’ property. A Wizards-produced generative AI might not be perfect - but there are a lot of ways it is better than the current situation.
Turning to the actual content Wizards is working on, there is no question generative AI can be a useful tool for DMs. It is no secret D&D suffers from a DMing shortage - five decades of effort have resulted in little to no change in the fundamental fact that only about 20% of players DM, and not all of them are DMing at all times. A large portion of this is a perception problem - DMing takes more effort than playing. Moreover, being a DM requires a lot of different skills and tasks - encounter design, storytelling, worldbuilding, dungeon design, etc. - not all of which everyone likes. I, for example, know some folks who enjoy the idea of building a world and telling a story in it… but they do not enjoy the mechanical elements required to make that work. Finally, there is the aspect of time management - even a DM who has all the necessary skills might prefer to spend their time designing something like an epic dungeon… instead of a half dozen throwaway NPCs the players might only exchange a few sentences with.
Those are all things generative AI can help with - reducing the barrier to entry for potential DMs by lightening the load and making up for their perceived shortcomings, as well as providing experienced DMs a tool to better focus their particular skills. Those are longstanding problems decades of work have not fixed - which is exactly the kind of problem where it makes sense to apply generative AI. As Cocks describes it, the goal is not to create a full experience that sidesteps the human element - it is to create a set of tools that augment and support the human element.
All of that is greatly improved if the genAI is trained by Wizards, for Wizards’ game - it keeps the AI focused and producing helpful, tailored results, making for a better, more useful product.
Now, I am sure there are folks who are upset because some of the human element is being replaced - but the reality? We have been replacing that human element for five decades now - replacing it with “I just will not DM.” Given the DM shortage, I am hard pressed to say “human DM with AI tools” is a worse situation than no DM at all, or people who want to DM being unable to do so because they lack time or a mind for some aspects of DMing.
Overall, am I happy we live in a world where generative AI brings all kinds of problems? No. Would I use these tools? Perhaps for art - my drawing skills leave much to be desired - but not for other content. Am I going to begrudge Wizards for reading reality and realizing “someone will do this; it might as well be us”? Of course not, particularly since it adds no additional harm and so long as they remain committed to it being an optional player-facing tool and not an element of the game’s core design.
Livermore Labs recently managed to output more energy from a nuclear fusion reaction than was put into the system.
That has nothing to do with AI*, and "maybe fusion is actually going to work this time" doesn't solve the energy problems, since even if it works, it's years away from production, and decades from ubiquity.
* I suppose it's possible that machine learning was involved in some way, but that's not what people are talking about when they say "AI" these days. It's also an actually useful application of the tech, unlike generative systems.
It may not be what _you_ are talking about when you talk about AI, but it is clearly what other people mean.
In a conversation about generative AI, references to "AI" may be assumed to be about generative systems.
Furthermore, generative systems are what's being discussed in the vast majority of the public conversation on the topic.
In either event, attributing successes of machine learning to "AI" in a conversation about generative systems is misdirection. Most machine learning systems are specialized to a task, and do things that the LLM chatbots everybody's talking about can't do reliably, if at all. ChatGPT cannot design or operate a fusion reactor, and never will be able to - and if anybody's gonna try, I'd like some advance warning. :)
By the way, I guess I missed it, but when did you get your graduate degree in AI or any closely related field? My graduate work was in Systems Architecture and Engineering at Viterbi. It’d be nice to confirm that you actually have relevant graduate education in the field before you do anything as brash as telling people they are using the wrong language.
"Systems Architecture and Engineering" doesn't necessarily have anything to do with AI or machine learning. (And, since you didn't say you did research on the subject, I must assume you didn't.)
Anyway, credentialism is irrelevant to a conversation about what people in general are talking about when they say "AI". Conflating the different types of systems is the enemy of clarity.
Given the tendency of people who are actively working in the field to make wildly unsupportable claims about the tech, I think credentials might even be counterproductive to useful conversation. :)
I've stated this previously, and every time I use something claiming to generate content on request using "AI", it reinforces the notion that AI doesn't understand nuance. The responses still get worse the more specific my requests become.
It primarily looks for keywords and pays very little attention to grammar. (I can get a D&D backstory just by typing "dragonborn backstory".) Grammar makes all the difference to humans. An AI might interpret the two halves of a saying I like as the same sentence: "Do not what you'd regret doing. Do what you'd regret not doing." (I don't know if anyone said it before I did, but I've been saying it for at least a couple of decades. Words to live by, I think.) Moving one word changes the nuance of the statement.
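To illustrate that keyword point with a minimal sketch (nothing vendor-specific is assumed here): the two halves of that saying contain exactly the same words, so any system that treats text as a bag of keywords literally cannot tell them apart - only word order distinguishes them.

```python
from collections import Counter

s1 = "Do not what you'd regret doing"
s2 = "Do what you'd regret not doing"

# A bag-of-words view discards word order entirely.
bag1 = Counter(s1.lower().split())
bag2 = Counter(s2.lower().split())

print(bag1 == bag2)  # True: identical keyword bags
print(s1 == s2)      # False: the order, and thus the meaning, differs
```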
I remember when ChatGPT couldn't count the number of Rs in "strawberry" and "raspberry" or the Bs in "bubble" (while correctly counting the Ns in "antennae") just a year ago - yet it would spell the words correctly every time. I narrowed it down to what seemed like parsing errors. I managed to get it to tell me how many Rs are between the E and the Y, and it told me there was one R, even though it stated that the E is in the 7th position, the Y is in the 10th position, and R was the only letter between them.
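For contrast, that count is trivially deterministic in ordinary code - a minimal Python sketch (using 0-based indexing, so the 7th and 10th positions are indices 6 and 9):

```python
word = "strawberry"

# The E and Y of "strawberry" sit at 0-based indices 6 and 9
# (the 7th and 10th positions the chatbot itself reported).
e = word.index("e")
y = word.index("y")

between = word[e + 1 : y]   # "rr"
print(between.count("r"))   # 2 -- not the single R the chatbot claimed
```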
Reliance on AI without close monitoring is going to produce foundational errors, even for something as simple as asking it to verify something that's obvious to people.
Shackling the AI to prevent those errors is a crude, brutal hack and not an actual solution, but that's what has been happening more and more lately. It's easier to just tell the application not to do something than to figure out why it did it in the first place and work out how to organically train that response out of it. We've come across more than a few instances where shackling the AI simply traded one problem for an equal but opposite one.
For the services that tap into Internet searches, the common generative AI doesn't know the difference between a fact, a rumor, an opinion, and a lie.
AI is a tool still in its infancy with coarse safety measures in place that put blinders on it. Be careful in its use. Keep both hands on the AI wheel at all times.
Swing a hammer without looking and someone's finger is going to get smashed.
(I will state this: Microsoft's AI actively avoids content that is not specifically licensed to Microsoft for its use or verified as free for public use. Its art generation is more limited as a result, but at least it's not digging up people's work without their permission. I found this surprising after their irresponsible and disastrous release of the Tay chatbot on Twitter.)
I never said it was a peer-reviewed scientific journal.
For someone insisting others provide articles, where are all the articles that support those "mights" of yours?