So, Cocks claimed that they'd be able to do it with their own IP, since they have so much of it - and, to be honest, they'd have to anyway: the point of the DM is to play D&D, and they don't want you to start roaming around Mos Eisley.
Why is that unethical? That's setting aside the debate over why my looking at a picture and copying the style is fine, but an AI doing something similar isn't (remember, WotC caught flak because an artist used AI to do touch-ups, not to create the artwork in the first place). Why is it inherently unethical?
As for environmental concerns, it would be interesting to see the actual numbers involved - specifically, how much it would increase things like water consumption compared to what we're already using by running DDB.
WotC doing what you describe in that vacuum may or may not raise ethical concerns, depending on the algorithms used, which are commonly considered proprietary, so very few know exactly how they will use the data they are trained on. They can use methods that would be illegal if they had to disclose what is in them.
The environmental concerns are the same as for any large-scale data center/"bitcoin" mining operation, which are substantial, to say the least.
CENSORSHIP IS THE TOOL OF COWARDS and WANNA BE TYRANTS.
I believe the thinking goes: any AI toolkits WotC would have access to are considered unethical because all existing AI tools were trained unethically. It's not what the AI "knows" or can search out in terms of content, but what it "knows" in terms of how to construct a response. Then, to be really rough with analogies, the AI's operating system is derived through unethical development. To borrow another discipline's analogy, the ability of an AI to process even WotC's "pure" data is itself the fruit of a poisonous tree and is thus unethical.
To get out of this conundrum, WotC would have to basically develop its own AI tools from scratch before it even teaches the AI WotC content, and Hasbro does not have that sort of cash lying around (which is why it's leaning into AI for content in the first place; no executive gives a flying fig about liberating creativity or whatever blather you see on Twitter, this "revolution" is about eliminating the cost of labor, but that goes into broader systems of ethics outside the scope of your inquiry).
The environmental concerns are the same as for any large-scale data center/"bitcoin" mining operation, which are substantial, to say the least.
Unlike bitcoin however, AI has worthwhile applications beyond being a vehicle for speculation or money laundering; its environmental footprint isn't the result of banks of processors competing to solve redundant math problems purely to generate more of itself. That includes its use to support fields where leisure and artistic pursuits (yes, artistic) intersect, such as D&D.
AI has no creativity. Sorry if I'm insulting anyone here, but I'm a fledgling writer and DM. AI encroaches on both of those, and I feel a bit threatened. If we have AI make all our entertainment for us, it'll be a very bleak future. And D&D and reading affect who we become in our later years. If that's shaped by machines... I can't imagine what might happen.
My titles are the great Silver Dragon Lord of the Sky, Second in Command of the Dragon Cult, High Warlock of Cynophobia, High Cultist of Jeff, The Lightning Mage. I’m a ✨Chronically online teenage boy✨, and one of the most active posters on the forums (MORE THAN SALEM AND GONZALO). Always open to talk if you’d like to shoot me a PM! Please don’t hesitate to tell me I’m being a jerk or overbearing, I love helpful feedback! Love y’all!
I can see an AI DM down the line - unless a lot more human DMs arrive on the scene suddenly, and I don't see that happening any time soon. They have made playing more fun than DMing (or it always was). It might take some work and trial runs to get all the bugs out, but it could be a thing one day.
Honestly, I don’t see AI DMs really being a thing until we get a massive paradigm shift in how AI function. An AI cannot actually think critically and consider “does this make sense” or “how do I adapt to this unforeseen situation or interaction”? It just stitches together a response that an algorithm determines to be most likely to be positively received. If you look over some examples of AI generated character backstories, you can typically pick out numerous inconsistencies or contradictions. It’s a useful tool, but it’s about as ready to stand in for a DM as cruise control is to stand in for a driver.
Beyond that, the AI can't:
- Consider what is best for the campaign
- Consider and respond to the different playstyles, needs, or goals of players
- Work with a player on a good side plot
Like... The whole point of D&D in the modern age is collective storytelling and human interaction. An AI simply can't do that.
If you actually read the interview, at no point does Cocks say that he expects AI DMs to be a thing. There are only two real statements about likely use of AI, and they both are around tools.
In talking about what Wizards can do:
We can build tools that aid in content creation for users or create really interesting gamified scenarios around them.
The first is obviously things like AI image generators. The second is unclear, but "gamified scenarios" probably means things that work conveniently as games, which I suspect means map/scenario generation (produce a map, throw some monsters on the map, maybe also traps, etc) as combat is the most gamist component of D&D.
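To illustrate what "produce a map, throw some monsters on the map, maybe also traps" could look like, here is a minimal sketch of procedural encounter generation. The monster list, grid size, and parameters are all my own invention for illustration, not anything WotC has announced:

```python
import random

# Hypothetical monster roster; purely illustrative.
MONSTERS = ["goblin", "skeleton", "giant rat", "bandit"]

def generate_room(width=8, height=6, n_monsters=3, trap_chance=0.2):
    """Build a grid, scatter monsters on it, maybe add a trap."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    # Pick distinct cells: n_monsters for monsters, one candidate for a trap.
    spots = random.sample(
        [(x, y) for x in range(width) for y in range(height)],
        n_monsters + 1,
    )
    for x, y in spots[:n_monsters]:
        grid[y][x] = random.choice(MONSTERS)[0].upper()  # initial marks a monster
    tx, ty = spots[-1]
    if random.random() < trap_chance:
        grid[ty][tx] = "T"  # maybe a trap
    return grid

for row in generate_room():
    print(" ".join(row))
```

Trivial, of course - the interesting (and hard) part of such tooling would be making the output thematically coherent and balanced for a given party, which is where the generative models would come in.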
The other is talking about personal experience using AI:
That’s going to be the mindset that you have to remember as you evolve for the brave new world of tools that we have coming out. I use AI in building out my D&D campaigns. I play D&D three or four times a month with my friends. I’m horrible at art. I don’t commercialize anything I do. It doesn’t have anything to do with work. But what I’m able to accomplish with the Bing image creator, or talking to ChatGPT, it really delights my middle-aged friends when I do a Roll20 campaign or a D&D Beyond campaign and I put some PowerPoints together on a TV and call it an interactive map.
which is, again, using AI to fill in gaps, not be the DM.
Now, this doesn't mean that there isn't potential for problems there, but it's valuable to try to criticize what people are actually saying, not what you imagine them saying.
The environmental concerns are the same as for any large-scale data center/"bitcoin" mining operation, which are substantial, to say the least.
Unlike bitcoin however, AI has worthwhile applications beyond being a vehicle for speculation or money laundering; its environmental footprint isn't the result of banks of processors competing to solve redundant math problems purely to generate more of itself. That includes its use to support fields where leisure and artistic pursuits (yes, artistic) intersect, such as D&D.
How "worthwhile" really comes back to the ethics and the algorithms used. Yes, there is a difference in the why, but not much: they are all looking for a high ROI for their investors, not the "good" of the users.
How "worthwhile" really comes back to the ethics and the algorithms used. Yes, there is a difference in the why, but not much: they are all looking for a high ROI for their investors, not the "good" of the users.
"High ROI for investors" would be D&D's objective whether AI was involved with it or not. It's a product made by a for-profit corporation.
AI is, in both the literal meaning of the word and as a D&D metaphor, a mimic.
Active characters:
Carric Aquissar, elven wannabe artist in his deconstructionist period (Archfey warlock)
Lan Kidogo, mapach archaeologist and treasure hunter (Knowledge cleric)
Mardan Ferres, elven private investigator obsessed with that one unsolved murder (Assassin rogue)
Xhekhetiel, halfling survivor of a Betrayer Gods cult (Runechild sorcerer/fighter)
How "worthwhile" really comes back to the ethics and the algorithms used. Yes, there is a difference in the why, but not much: they are all looking for a high ROI for their investors, not the "good" of the users.
"High ROI for investors" would be D&D's objective whether AI was involved with it or not. It's a product made by a for-profit corporation.
And they are legally obligated to provide the ROI, not create an ethical "ai" tool for the end user.
Do any video games out today get as close to a DM as we would want?
Do they create content like a good DM?
Do they run a mission as well as a DM?
Can they run a campaign even close to a real live DM?
Hundreds of millions in investment and 20 years of development, and we still feel like we are in a railroad campaign, being guided in a specific direction. Oh, they could add a bunch of side missions or even make a dozen core missions, but in the end we all know the outcome of most of them.
Now add in a second player and the whole system starts to break down, with conflicting decisions from multiple players in the same game/team.
The AI can only be as good as the sources it uses, and from what the AI systems are putting out lately, I would hate to see what data sources they are using.
As I said in my post on the first page, we've reached the point where AIs are starting to be trained on content that was produced by AIs.
Find your own truth, choose your enemies carefully, and never deal with a dragon.
"Canon" is what's factual to D&D lore. "Cannon" is what you're going to be shot with if you keep getting the word wrong.
As I said in my post on the first page, we've reached the point where AIs are starting to be trained on content that was produced by AIs.
That does explain some experiences, especially when the AI goes "f' it, here's a human."
WotC doing what you describe in that vacuum may or may not raise ethical concerns, depending on the algorithms used, which are commonly considered proprietary, so very few know exactly how they will use the data they are trained on. They can use methods that would be illegal if they had to disclose what is in them.
Ok, don't take this the wrong way, because I am sensitive to ethical concerns; I'm just raising an eyebrow at the special case that seems to be made for AI.
Unethical behaviour is endemic to society, certainly in the business world. Anyone who hasn't seen it is either willfully looking away or just hasn't been affected by it... yet. There are even those in this thread who have been quite dismissive when I (and others) have pointed out examples of unethical behaviour by WotC. Go into any shop that belongs to a chain and I can almost guarantee that unethical stuff is going on, even if you can't see it. The market practically forces that behaviour.
But nobody cares, not for long enough to actually think about it. Yet because WotC might do something that might be unethical with AI, despite their express claims to the contrary, and we can't audit that... it's a substantial problem?
The base concern, that AI uses "stolen" IP, is a lot greyer than some are claiming. We're now getting to "I'm not happy because there's a possibility that it's happened, even when people are expressly saying otherwise". As I said, it seems a special case.
The environmental concerns are the same as for any large-scale data center/"bitcoin" mining operation, which are substantial, to say the least.
The impact is relative, though. Let's look at water usage, since that was explicitly mentioned. How much would it cause DDB to expand its water footprint? A fraction of its current size? Double? A hundredfold?
If it's a hundredfold, then yes, we'd need to be very careful about it. If it's a fraction... then I hope those complaining are vegetarians, because you can save a lot more water by eating vegetarian rather than meat-based meals. You can halve your water footprint just by making that change, and that's a lot more significant than what using DDB does currently (presumably - if using DDB really does use so much water that it rivals meat, then we need to be having a discussion about DDB vs physical books). It rings a little hollow if they're fretting about a couple of litres a week, give or take, on AI whilst spending literally hundreds every day on a diet choice. If it's only a fraction, there are a lot more meaningful ways of reducing water usage. Then there's the spectrum in between.
Which is why scale is important to the discussion.
If you're not willing or able to discuss in good faith, then don't be surprised if I don't respond; there are better things in life for me to do than humour you. This signature is that response.
The thing is, AI is now developing incredibly fast. I first started using ChatGPT (I say "use" - I don't really use it, I've just experimented with it to see what it could do; I've done nothing actually productive with it) a year ago and it was shockingly awful. Now, it's actually capable of talking with me, understanding what I'm saying, and giving meaningful responses. It used to just spit out garbage, but now it mostly gets things more or less right (albeit with the occasional curveball). It's at the point where I can get more sense out of it than the average human on the internet.
Sure, it's not where it needs to be to DM a game for us. However, it is improving, and improving rapidly. We'll see AIs capable of providing a decent DM experience within years; I'll be shocked if it isn't developed within two decades, and I wouldn't be surprised if it's commonplace by the end of this decade.
We're not going to see viable AI DMs without a paradigm shift in the tech. Generative models can't do it; it's not what they're made for.
They are, at the core, designed to output words that are probably the correct words to continue the output, based on their initial model and the text so far. That's all they do.
They can't know what rules are, or enforce them. They're just more text to guide the future text. If a wizard says "I cast fireball", they're likely to try to describe a fireball being cast, because, in their training data, when a wizard says they cast fireball, they can cast it. The fact that this wizard is first level is unlikely to enter into it.
They are unlikely to ever be able to cleanly keep multiple people and their input differentiated.
They don't have any ability to differentiate between types of information. They can't keep secrets. Even the preloaded prompts that they're supposed to hide are regularly extracted by users who are trying. Anything the model comes up with on the fly is going to have no such protection. What's the Big Bad up to? You'll likely find out as soon as you start inquiring.
They can't plan. No matter how improvisational the GM, you're not coming up with everything on the spur of the moment. Sometimes there's a thing that can follow from what's happened, and you take steps to set it up. Generative models can't; they operate entirely in the moment.
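The "output words that probably continue the text" behaviour described above is easy to see in miniature. Here is a toy bigram model - a drastically simplified stand-in for a real LLM, purely for illustration - that continues a prompt by word-follows-word statistics alone, with no rules engine that could check, say, whether the wizard is high enough level to cast fireball:

```python
import random
from collections import defaultdict

# Tiny training "corpus": in it, casting fireball always just works.
corpus = (
    "the wizard says i cast fireball and the fireball explodes . "
    "the wizard says i cast fireball and the goblins burn . "
).split()

# Learn which words follow which (bigram statistics).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(prompt, n_words=6):
    """Append the statistically plausible next word, repeatedly."""
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# No matter what level this "wizard" is, the model happily continues the cast.
print(continue_text("i cast fireball"))
```

A real model is vastly more sophisticated, but the point stands: the rules of D&D are just more text to it, not constraints it can enforce.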
Like... The whole point of D&D in the modern age is collective storytelling and human interaction. An AI simply can't do that.
The human interaction part is the difficult part for an AI, and it very well might never be possible. For example, I have one player who is a tad emotionally soft. Things I can get away with for my other players would cross a line for her. This gives me, as the DM, both a weapon and a pitfall: there is a balancing act of pushing on that player to leverage this weakness, while also carefully watching her mannerisms to make sure the player is not broken.
AI experts (one of which, ironically, this particular player happens to be) doubt we will ever be able to create an AI that can do that level of empathetic, individual-by-individual analysis.
But, you know what, neither can many people - just look at this thread itself, born of an individual who did not apply even a basic level of critical reading to an interview. Or any number of threads on this site where folks could have avoided their issues by using a little empathy and common sense.
Overall, an AI DM may never be perfect - but, for a lot of players, an imperfect DM is better than no DM at all. We have five decades of data showing players will put up with incompetent, rude, directionless, railroading, and/or unempathetic DMs if it means they get to play the game. Additionally, people have been enjoying video games, and games like Gloomhaven which try to recreate the D&D experience, for years - those kinds of games tend to sell very well and are often among the top-rated in their genre, so there clearly is a tolerance for a limited storyteller.
I expect an AI DM will be able to scratch the basic itch in a way that gives folks a basic level of satisfaction. Perhaps it is not going to weave a rich tapestry of love and death, victory and heartache; perhaps it will never be able to play on players' particular quirks, bending them just far enough that they do not break. But even if it never reaches the level that I expect from a DM, we are not too far away from it being able to replicate an experience sufficient for those willing to settle.
Don't forget about AI DM downtime to update software, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one can only imagine.
Then you have the possibility that the AI flips out at a player, begins to speak gibberish, and tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes "screw this, I can find better" (or worse). Then what?
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
Don't forget about AI DM downtime to update software, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one can only imagine.
This is true of playing on Zoom, or DDB, anyway.
Then you have the possibility that the AI flips out at a player, begins to speak gibberish, and tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes "screw this, I can find better" (or worse). Then what?
The former can happen, though not normally with modern generative models.
The latter, not so much. They can't "try to influence the user", because they have no intent. Can they be poked into spewing out inappropriate things? Absolutely - it's one of the many problems of the unbelievably huge training sets needed. But if you're not deliberately trying, it's pretty unlikely, as I understand it, unless somebody is one of those people who finds acknowledging the existence of gay people to be unacceptable brainwashing or the like.
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
It'd have to be a pay service. These things are expensive to run and incredibly expensive to train. Currently, their business model involves burning vast amounts of venture capital money, but the VCs are eventually going to want a payday.
To get out of this conundrum, WotC would have to basically develop its own AI tools from scratch before it even teaches the AI WotC content, and Hasbro does not have that sort of cash lying around (which is why it's leaning into AI for content in the first place; no executive gives a flying fig about liberating creativity or whatever blather you see on Twitter, this "revolution" is about eliminating the cost of labor, but that goes into broader systems of ethics outside the scope of your inquiry).
Jander Sunstar is the thinking person's Drizzt, fight me.
Hasbro is very, very small potatoes in this particular conflict.
"AI" is, like many hotly debated social topics, just a tool, and how that tool is used is more important than what it can do.
CENSORSHIP IS THE TOOL OF COWARDS and WANNA BE TYRANTS.
And they are legally obligated to provide the ROI, not create an ethical "ai" tool for the end user.
CENSORSHIP IS THE TOOL OF COWARDS and WANNA BE TYRANTS.
Do any video games out today get as close to a DM as we would want?
Do they create content like good DM?
Do they run and mission as good as a DM?
Can they run a campaign even close to a real live DM?
hundreds of millions in investments and 20 years of development and we still feel like we are in a rail road campaign. Being guided into a specific direction. Oh they could ad a bunch of side missions or even make a dozen core missions but in the end we all know the outcome of most of them.
Now ad in a second player and the whole system starts to break down with conflicting decisions from multiple payers in the same game/team.
The Ai can only be as good as the sources it uses and from what the AI systems are putting out lately I would hate to see what data sources they are using.
As I said in my post on the first page, we've reached the point where AIs are starting to be trained on content that was produced by AIs.
Find your own truth, choose your enemies carefully, and never deal with a dragon.
"Canon" is what's factual to D&D lore. "Cannon" is what you're going to be shot with if you keep getting the word wrong.
That does explain some experiences, especially when the Ai goes f’it here’s a human.
Ok, don't take this the wrong way, because I am sensitive to ethical concerns, I'm just raising an eyebrow at the special case that seems to be being made for AI.
Unethical behaviour is endemic to society, at least certainly in the business world. Anyone who hasn't seen it, is either willfully looking away or just hasn't been affected by it...yet. There are even those on this thread that have been quite dismissive when I (and others) have pointed out examples of unethical behaviours of WotC. Go into a shop that belongs to a chain. I can almost guarantee that unethical stuff is going on, even if you can't see it. The market practically forces that behaviour.
But nobody cares, not long enough to actually think about it. But because WotC might do something that might be unethical with AI, despite their express claims to the contrary and we can't audit that...it's a substantial problem?
The base.concern, that AI uses "stolen" IP, is a lot greyer than some are claiming. We're now getting to "I'm not happy because there's a possibility that it's happened even when people are expressly saying otherwise". As I said, it seems a special case.
The impact is relative though. Let's look at water usage, since that was explicitly mentioned. How much would it cause DDB to expand its water footprint? A fraction of it's current size? Double? Hundredfold?
If it's a hundredfold, then yes, we'd need to be very careful about it. If it's a fraction...then I hope those complaining are vegetarians, because you can save a lot more water by eating vegetarian rather than meat based meals. You can have your water footprint just by making that change, and that's a lot more significant than what using DDB does currently (presumably, if using DDB really does use that much water that it rivals meat, then we need to be having a discussion about using DDB v physical books). It rings a little hollow if they're fretting about about a couple of litres once a week, give or take, on AI whilst wasting literally hundreds every day on a diet choice. If it's only a fraction, there are a lot more meaningful ways of reducing water usage. Then there's the spectrum in between.
Which is why scale is important to the discussion.
If you're not willing or able to discuss in good faith, then don't be surprised if I don't respond; there are better things in life for me to do than humour you. This signature is that response.
The thing is, AI is now developing incredibly fast. I first started using ChatGPT (I say use...I don't really use it; I've just experimented with it to see what it could do, nothing actually productive) a year ago, and it was shockingly awful. Now it's actually capable of talking with me, understanding what I'm saying and giving meaningful responses. It used to just spit out garbage, but now it mostly gets things more or less right (albeit with the occasional curveball). It's now at the point where I can get more sense out of it than the average human on the internet.
Sure, it's not where it needs to be to DM a game for us. However, it is improving, and it's improving rapidly. We'll see AIs capable of providing a decent DM experience within years. I'll be shocked if it isn't developed within two decades. I wouldn't be surprised if it's commonplace by the end of this decade.
We're not going to see viable AI DMs without a paradigm shift in the tech. Generative models can't do it; it's not what they're made for.
They are, at the core, designed to output words that are probably the correct words to continue the output, based on their initial model and the text so far. That's all they do.
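That "output the probable next words" loop can be sketched with a toy bigram model. This is not how a transformer works internally (real models operate over subword tokens with learned attention), but the shape of the loop is the same: look at the text so far, pick a likely continuation, append, repeat.

```python
# Toy bigram "language model": count which word follows which in a tiny
# corpus, then generate text by repeatedly emitting the most likely
# next word (greedy decoding). Illustrative only.
from collections import Counter, defaultdict

corpus = ("the wizard casts fireball and the fireball explodes "
          "and the party cheers").split()

# Build the model: for each word, count its observed successors.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break  # nothing ever followed this word in training
        # Emit the single most probable continuation.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("casts", steps=1))
```

Notice there is no game state anywhere in this loop: "fireball" follows "casts" because it did in the training text, and for no other reason. Scaling the model up makes the continuations vastly better, but it doesn't change what the loop fundamentally is.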
They can't know what rules are, or enforce them. They're just more text to guide the future text. If a wizard says "I cast fireball", they're likely to try to describe a fireball being cast, because, in their training data, when a wizard says they cast fireball, they can cast it. The fact that this wizard is first level is unlikely to enter into it.
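This is why any workable AI DM would need a conventional rules layer sitting outside the model, where the program, not the text generator, decides whether the fireball is legal. A minimal sketch, with made-up names and deliberately simplified spell data (not WotC's actual rules engine or anyone's real design):

```python
# Sketch of an external rules layer: spell slots live in ordinary
# program state, and a cast is only narrated if the check passes.
# Spell levels are simplified for illustration.

SPELL_LEVELS = {"fire bolt": 0, "magic missile": 1, "fireball": 3}

class Wizard:
    def __init__(self, slots_by_level):
        # e.g. {1: 2} means two first-level slots and nothing higher
        self.slots = dict(slots_by_level)

    def try_cast(self, spell):
        level = SPELL_LEVELS[spell]
        if level == 0:
            return True  # cantrips always work
        if self.slots.get(level, 0) > 0:
            self.slots[level] -= 1  # consume the slot
            return True
        return False  # no slot: the generator never narrates the blast

level_one = Wizard({1: 2})
print(level_one.try_cast("magic missile"))  # succeeds, burns a slot
print(level_one.try_cast("fireball"))       # fails: no 3rd-level slot
```

Only when `try_cast` succeeds would the outcome be handed to the text generator to describe. The model alone can't provide this, because the slot count isn't text it was trained on; it's state that has to be tracked and enforced deterministically.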
They are unlikely to ever be able to cleanly keep multiple people and their input differentiated.
They don't have any ability to differentiate between types of information. They can't keep secrets. Even the preloaded prompts that they're supposed to hide are regularly extracted by users who are trying. Anything the model comes up with on the fly is going to have no such protection. What's the Big Bad up to? You'll likely find out as soon as you start inquiring.
They can't plan. No matter how improvisational the GM, you're not coming up with everything on the spur of the moment. Sometimes there's a thing that can follow from what's happened, and you take steps to set it up. Generative models can't; they operate entirely in the moment.
The human interaction part is the difficult part of an AI, and very well might never be possible. For example, I have one player who is a tad emotionally soft. Things I can get away with for my other players would cross a line for her. This both gives me, as the DM, a weapon and a pitfall - there is a balancing act of pushing on that player to manipulate this weakness, while also being careful to watch their mannerisms to make sure the player is not broken.
AI experts (which, ironically, this particular player happens to be) doubt we will ever be able to create an AI that can do that level of empathetic, individual-by-individual analysis.
But, you know what, neither can many people - just look at this thread itself, born of an individual who did not apply even a basic level of critical reading to an interview. Or any number of threads on this site where folks could have avoided their issues by using a little empathy and common sense.
Overall, an AI DM may never be perfect - but, for a lot of players, an imperfect DM is better than no DM at all. We have five decades of data showing players will put up with incompetent, rude, directionless, railroading, and/or unempathetic DMs if it means they get to play the game. Additionally, people have for years been enjoying video games, or games like Gloomhaven, which try to recreate the D&D experience - those kinds of games tend to sell very well and are often among the top-rated in their genre, so there clearly is a tolerance for a limited storyteller.
I expect an AI DM will be able to scratch the basic itch in a way which gives folks a basic level of satisfaction. Perhaps it is not going to weave a rich tapestry of love and death, victory and heartache; perhaps it will never be able to play on players’ particular quirks, bending them just far enough so they do not break. But, even if it never reaches the level that I expect from a DM, we are not too far away from it being able to replicate an experience sufficient for those willing to settle.
Don’t forget about AI DM downtime for software updates, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one can only imagine.
Then you have the possibility that the AI flips out at a player, begins to speak gibberish, or tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes "screw this, I can find better (or worse)". Then what?
Lastly, IMO, when would such an AI DM become a pay-to-play service? Somebody has got to pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
This is true of playing on Zoom, or DDB, anyway.
The former can happen, though not normally with modern generative models.
The latter, not so much. They can't "try to influence the user", because they have no intent. Can they be poked into spewing out inappropriate things? Absolutely. It's one of the many problems of the unbelievably huge training sets needed. But if you're not trying, it's pretty unlikely as I understand it, unless somebody is one of those people who finds acknowledging the existence of gay people to be unacceptable brainwashing or the like.
It'd have to be a pay service. These things are expensive to run and incredibly expensive to train. Currently, their business model involves burning vast amounts of venture capital, but the VCs are eventually going to want a payday.