Correct, but the data can be used in many ways without the code, it is the code that makes it special.
Um... so? The basic way an LLM works is you feed it a bunch of data and tell it "produce stuff that looks like what we fed you".
Yes, but you don't get to see behind the curtain, so to speak, so we have no idea why it "picks" what to use or discard. The why is very important, and it is closely guarded.
CENSORSHIP IS THE TOOL OF COWARDS and WANNA BE TYRANTS.
They're neural net programs, no-one knows why. It doesn't pick or discard at all, it just ingests everything.
People do know why. There are plenty examples of it "making" choices and pushing things that are both incorrect and illogical, this is not an area of discussion I am willing to get into any further on this site.
In many cases, the “why” of AI is less “closely guarded” and more “actually unknowable.” A well-documented issue with AI is the black box problem - the idea that, once an AI gets going, even its inventors have no way of knowing how it reaches a decision. Essentially, you can tell what was input into the AI and you can tell what it spits out, but no one can really tell you how the AI went from point A to point B.
You are correct that AIs can spit out completely bonkers point Bs, but even their inventors often do not know why the AI does that. Often, the best the creators can really do is try to retrain the AI with data that comports with the desired output and hope that whatever happens in the black box draws on the new data rather than whatever old data set caused the wrong answers.
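To make the black box point concrete, here's a deliberately tiny, hypothetical sketch - not any real model, just a one-layer network with made-up weights. Every parameter is fully visible, yet reading the numbers tells you nothing about *why* a given input produces a given output; a real model has billions of them:

```python
import math

# Hypothetical, made-up weights for a tiny one-layer network
# (2 inputs, 3 outputs). Every parameter is in plain sight.
weights = [[0.8, -1.2, 0.5],
           [-0.3, 0.9, 1.1]]
bias = [0.1, -0.4, 0.2]

def forward(x):
    """Dense layer + sigmoid. Fully inspectable - but the raw
    numbers are not an explanation of the decision."""
    outs = []
    for j in range(3):
        z = bias[j] + sum(x[i] * weights[i][j] for i in range(2))
        outs.append(1 / (1 + math.exp(-z)))
    return outs

print(forward([1.0, 0.0]))  # three values between 0 and 1
```

You can audit the input, the output, and every weight in between, and still have no human-readable account of the path from point A to point B - and that gap only widens as the network grows.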
People do know why. There are plenty examples of it "making" choices and pushing things that are both incorrect and illogical, this is not an area of discussion I am willing to get into any further on this site.
If people knew why it was doing that, it wouldn't be doing it.
They know why in the general case -- LLMs don't "know" anything; they track word correlations.
But if you want to know about any specific thing it outputs, the correlations are so complicated (and, I believe, the axes of correlation are not fully human-chosen) that nobody can say why that happened.
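As a toy illustration of "tracking word correlations," here is a bigram model - not how real LLMs work (they use neural networks over long contexts), but the predict-the-next-word-from-training-statistics idea is the same flavor. The corpus and everything else here is invented for the example:

```python
import random
from collections import defaultdict, Counter

# Tiny invented training text.
corpus = "the dragon guards the hoard and the knight guards the gate".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Sample a continuation, weighting each next word by how often
    it followed the current word in the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Scale the corpus up to a large chunk of the internet and replace the count table with a neural network, and you get something LLM-shaped - which is also the point at which nobody can say why any particular word was chosen.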
Welp. I can't actually access the link to the interview for reasons and stuff. But here are my main thoughts on AI & D&D:
I used to think that AI shouldn't be used at all because it was created unethically, but now I really dunno about that. For me, the most important hope I have is that Wotzy doesn't use generative AI to replace their workers, to write their books, or to take away opportunities from all of us homebrewers. Unfortunately, I worry they'll eventually reduce part of their workforce as they let AI "transform" their workplace into a zone for more money and less human creativity. But who knows.
Also, I generally agree with what Caerwyn said above.
And they are legally obligated to provide the ROI, not create an ethical "ai" tool for the end user.
The ethics of AI tools (in any context) will be decided in a courtroom, not a D&D forum. Until then all we have is the endless back-and-forth of personal opinion.
Don’t forget about AI DM downtime to update software, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one could only imagine.
Then you have the possibility the AI flips out at a player, begins to speak gibberish, and tries to influence the end user in ways some might think or believe is unethical - and for whatever reason that player goes "screw this, I can find better (or worse)." Then what?
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and equipment maintenance costs, and those things are not cheap in the slightest.
Literally all of these apply to human DMs too though. Humans need far more downtime than any machine, especially for leisure activities that need to be juxtaposed to mandatory obligations of work and family, to say nothing of the downtime issues that accompany both virtual and in-person play. Human DMs make mistakes and cause issues at the table too. And if DMing itself is their job, the only way for that to be feasible long-term is pay-to-play.
Why is Hasbro/WotC the one company that shouldn't be investigating the potential uses of AI? The company has to think about its investors', consumers', and employees' (current and future) interest in how AI can service them. Competitors of theirs are surely paying attention and exploring its use.
I’d rather be offered an AI ASSISTANT than a bot that tries to run a game. IMHO
1) Chris Cocks didn't say anything about "a bot that tries to run a game" so you should be good. 2) In order to do either an ASSISTANT or a bot, they will still have to investigate AI uses in a D&D context, so petepan3 is still correct.
The ethics of AI tools (in any context) will be decided in a courtroom, not a D&D forum. Until then all we have is the endless back-and-forth of personal opinion.
A courtroom is only necessary if unethical means are used. No endless back and forth needed. Until WotC releases "ai" tools everyone is speculating - will they create it in house, or license something they don't know how or why it does what it does? Anyone's guess right now.
Just read any thread on this site about rules RAW or RAI and imagine having "ai" join the discussion - that is not something I want in my games. Heck, even without the "ai", most of it I wouldn't want in my games.
The ethics of AI tools (in any context) will be decided in a courtroom, not a D&D forum. Until then all we have is the endless back-and-forth of personal opinion.
No, the legalities of AI will be decided in a courtroom. The discovery process may provide useful input on ethical issues, but the legal system isn't designed to solve ethical problems.
Yup. Courts are designed to care about law and not morality (though practically, humans end up caring about both).
No, the legalities of AI will be decided in a courtroom. The discovery process may provide useful input on ethical issues, but the legal system isn't designed to solve ethical problems.
Fine - but D&D forums aren't designed to solve them either. So you'll forgive me if I don't put much stock in what random posters consider ethical.
There is a place for AI in D&D. People need to admit this.
The problem is that the people who want to push AI on us don’t actually create for D&D.
AI Art, the ability to generate sounds, getting the AI to translate old written content to a more modern and readable format… These are all AI tools that I use daily in my projects.
But the people who want to use AI in D&D want it to be used as a way to take away the creative part of a DM’s job. They want to replace the human DM with an AI. It is scary not because of what it is, but because younger generations will come to think that a human DM isn’t needed.
The reality is that no one in Wizards has ever talked about using AI to replace human DMs. To the contrary, as D&D’s digital footprint has expanded, Wizards has been pretty clear that they understand the role of the traditional DM is important to both players and the health of the game.
Right now, there only exists two real points of data about Wizards’ long-term AI plans. There is this interview - which, not only does not say anything about Wizards using AIs to replace human creativity, but specifically has the CEO of Hasbro talk about ways he uses AI to bring his creativity to D&D to life.
The second is Wizards’ AI art policy - where they have specifically prohibited their artists from using AI, drafting a policy which ensures the human creative element remains alive and well in D&D’s publications.
Everything else is just noise and speculation - or people outside Wizards/Hasbro who have their own ideas about how AI should be used. But, within Wizards itself, all present indications suggest the exact opposite of what your post claims - that the people making D&D want to use AI in a way which assists human DMs actualize their creative vision, rather than supplant human creativity.
It’s very nice to think about but you have to look at it from a capitalist point of view. I fell for the same nice-guy tactics that Nintendo used.
Don’t fall for it. It’s part of what’s wrong with… everything.
For example, if I was an investor in WotC, I would be pulling out if they were refusing to use AI in their projects. I just would. It sucks…
From a capitalist point of view, Wizards of the Coast sells most of its books to DMs, so why would they want fewer of them?
Wild speculation based on your own personal misgivings remains wild speculation. I will be sticking with actual evidence, not worrying over things which might happen - and the actual evidence discounts pretty much every part of your speculation, even from the capitalistic point of view.
There are capitalistic realities which suggest your fear-mongering is of little probative value. Wizards has long acknowledged that their entire business plan heavily involves leaning on the 20% or so of players who DM. DMs are not only the ones who run the game - they are the ones who actually buy product. Wizards, and TSR before them, have spent decades trying to get more players to purchase things for D&D… no matter what they do, the numbers have stayed about the same, with only 20% of players contributing financially to the game.
They also are well aware that their players’ creativity is far more important than Wizards’. Since the game’s inception, the most common setting has been “homebrew” followed by “homebrewed version of official worlds.” Wizards collects a whole lot of data through surveys, sales, and other sources - they know full well that DM creativity is the far greater driving force behind the game than Wizards’ official content.
In capitalistic (and legally binding) financial talks, Wizards has repeatedly acknowledged the importance of keeping the key financial demographic of human creators happy. They want to expand beyond that, but they have also acknowledged they want to expand in a way which will not alienate the core purchasers.
Wizards’ capitalistic nature is the strongest argument that they will not try to subjugate DM creativity to an AI - they know where their money comes from and are not going to kill the golden goose. They are far less likely to push for a paradigm shift in how the game is played than they are to try and create systems which augment a business model which has worked incredibly well.
I don’t know if this has come up before or not, but you know what would be a cool application of AI for D&D? An art generator trained on WotC’s art for the playable races that you could use to generate character portraits. If they keep the art in-house they can hardly be accused of stealing, and it would make it easier to get something like a Goliath or Genasi or Thri-kreen than one trained on general terms. No idea if it’s practical, but it’d be something they could market to both sides of the table.
Nowhere near enough art to do it with AI. They could do it without AI, in exactly the way video games do it, and I suspect they're working on it for the VTT.
This thread has deviated from the original topic, or D&D in general. Please remember to stay on topic