How "worth while" really gets back to the ethics and algorithms used, but yes there is a difference in the why, but not much, they all are looking for a high ROI for their investors not the "good" for the users.
"High ROI for investors" would be D&D's objective whether AI was involved with it or not. It's a product made by a for-profit corporation.
AI is, in both the literal meaning of the word and as a D&D metaphor, a mimic
Active characters:
Edoumiaond Willegume "Eddie" Podslee, Vegetanian scholar (College of Spirits bard)
Lan Kidogo, mapach archaeologist and treasure hunter (Knowledge cleric)
Peter "the Pied Piper" Hausler, human con artist/remover of vermin (Circle of the Shepherd druid)
PIPA - Planar Interception/Protection Aeormaton, warforged bodyguard and ex-wizard hunter (Warrior of the Elements monk/Cartographer artificer)
Xhekhetiel, halfling survivor of a Betrayer Gods cult (Runechild sorcerer/fighter)
Do any video games out today get as close to a DM as we would want?
Do they create content like a good DM?
Do they run a mission as well as a DM?
Can they run a campaign even close to a real live DM?
Hundreds of millions in investment and 20 years of development, and we still feel like we're in a railroaded campaign, being guided in a specific direction. Oh, they could add a bunch of side missions or even make a dozen core missions, but in the end we all know the outcome of most of them.
Now add in a second player and the whole system starts to break down with conflicting decisions from multiple players on the same game/team.
The AI can only be as good as the sources it uses, and from what the AI systems are putting out lately, I would hate to see what data sources they are using.
As I said in my post on the first page, we've reached the point where AIs are starting to be trained on content that was produced by AIs.
Find your own truth, choose your enemies carefully, and never deal with a dragon.
"Canon" is what's factual to D&D lore. "Cannon" is what you're going to be shot with if you keep getting the word wrong.
That does explain some experiences, especially when the AI goes "f' it, here's a human."
Place dental impression upon the metallic gluteus maximus.
So, Cocks claimed that they'd be able to do it with their own IP since they have so much of it, and to be honest, they'd have to anyway: the point of the DM is to play D&D; they don't want you to start roaming around Mos Eisley.
Why is that unethical? And that's without getting into the debate of why me looking at a picture and copying the style is fine, but an AI doing something similar isn't (remember, WotC caught flak because an artist used AI to do touch-ups, not to create the artwork in the first place). Why is it inherently unethical?
As for environmental concerns, it would be interesting to see actual numbers involved, specifically how much it would increase things like water consumption compared to what we're using now by using DDB.
WotC doing what you describe in that vacuum may or may not raise ethical concerns depending on the algorithms used, which are commonly considered proprietary, so very few know exactly how they will use the data they are trained on. They can use methods that would be illegal if they had to disclose what is in them.
Ok, don't take this the wrong way, because I am sensitive to ethical concerns; I'm just raising an eyebrow at the special case that seems to be made for AI.
Unethical behaviour is endemic to society, at least certainly in the business world. Anyone who hasn't seen it, is either willfully looking away or just hasn't been affected by it...yet. There are even those on this thread that have been quite dismissive when I (and others) have pointed out examples of unethical behaviours of WotC. Go into a shop that belongs to a chain. I can almost guarantee that unethical stuff is going on, even if you can't see it. The market practically forces that behaviour.
But nobody cares, at least not long enough to actually think about it. Yet because WotC might do something that might be unethical with AI, despite their express claims to the contrary, and because we can't audit that... it's a substantial problem?
The base concern, that AI uses "stolen" IP, is a lot greyer than some are claiming. We're now getting to "I'm not happy because there's a possibility that it's happened even when people are expressly saying otherwise". As I said, it seems a special case.
The environmental concerns are the same as for any large-scale data centre/"bitcoin" mining operation, which are substantial to say the least.
The impact is relative though. Let's look at water usage, since that was explicitly mentioned. How much would it cause DDB to expand its water footprint? A fraction of its current size? Double? Hundredfold?
If it's a hundredfold, then yes, we'd need to be very careful about it. If it's a fraction... then I hope those complaining are vegetarians, because you can save a lot more water by eating vegetarian rather than meat-based meals. You can halve your water footprint just by making that change, and that's a lot more significant than what using DDB does currently (presumably; if using DDB really does use so much water that it rivals meat, then we need to be having a discussion about using DDB v physical books). It rings a little hollow if they're fretting about a couple of litres once a week, give or take, on AI whilst wasting literally hundreds every day on a diet choice. If it's only a fraction, there are a lot more meaningful ways of reducing water usage. Then there's the spectrum in between.
Which is why scale is important to the discussion.
If you're not willing or able to to discuss in good faith, then don't be surprised if I don't respond, there are better things in life for me to do than humour you. This signature is that response.
The thing is, AI is now developing incredibly fast. I first started using ChatGPT (I say use... I don't really use it, I just experimented with it to see what it could do; I've done nothing actually productive with it) a year ago and it was shockingly awful. Now, it's actually capable of talking with me, understanding what I'm saying and giving meaningful responses. It used to just spit out garbage, but now it mostly gets things more or less right (albeit with the occasional curveball). It's now at the point where I can get more sense out of it than the average human on the internet.
Sure, it's not where it needs to be at to DM a game for us. However, it is improving, and it's improving rapidly. We'll see AIs capable of providing a decent DM experience within years. I'll be shocked if it isn't developed within two decades. I wouldn't be surprised if it's commonplace by the end of this decade.
We're not going to see viable AI DMs without a paradigm shift in the tech. Generative models can't do it; it's not what they're made for.
They are, at the core, designed to output words that are probably the correct words to continue the output, based on their initial model and the text so far. That's all they do.
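That next-token loop is worth seeing concretely. Here's a toy sketch in Python (the vocabulary and probabilities are entirely made up for illustration; real models work over huge learned distributions, not a hand-written table):

```python
import random

# Toy autoregressive "language model": given the text so far, it only
# scores which token is likely to come next, then repeats that one step.
# The table below is invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "I cast": {"fireball": 0.7, "a spell": 0.2, "doubt": 0.1},
    "fireball": {".": 0.6, "at the goblin": 0.4},
}

def next_token(context: str) -> str:
    # Fall back to ending the sentence if the context is unknown.
    probs = NEXT_TOKEN_PROBS.get(context, {".": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt: str, steps: int = 2) -> str:
    out = prompt
    context = prompt
    for _ in range(steps):
        tok = next_token(context)
        out += " " + tok
        context = tok  # toy model: only the last token is the context
    return out
```

Note there's no game state anywhere in that loop: "I cast fireball" being followed by a fireball going off is just the statistically likely continuation.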
They can't know what rules are, or enforce them. They're just more text to guide the future text. If a wizard says "I cast fireball", they're likely to try to describe a fireball being cast, because, in their training data, when a wizard says they cast fireball, they can cast it. The fact that this wizard is first level is unlikely to enter into it.
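One workaround people have proposed (a hypothetical design sketch, not anything WotC or any vendor has announced) is to keep the rules in ordinary deterministic code and only let the generator narrate outcomes the rules engine has already approved:

```python
# Hypothetical rules gate: plain code validates the action before any
# text generator is asked to narrate it. Fireball being a 3rd-level
# spell follows the 5e pattern, but this table is just a stub.
SPELL_LEVELS = {"fire bolt": 0, "magic missile": 1, "fireball": 3}

def can_cast(spell: str, slot_levels_available: set[int]) -> bool:
    level = SPELL_LEVELS.get(spell)
    if level is None:
        return False  # unknown spell: reject rather than improvise
    if level == 0:
        return True   # cantrips need no slot
    return any(s >= level for s in slot_levels_available)

# A 1st-level wizard has only 1st-level slots:
slots = {1}
print(can_cast("magic missile", slots))  # a 1st-level slot suffices
print(can_cast("fireball", slots))       # rejected before narration
```

The point is that the legality check happens in exact, auditable code; the generative model is never the component deciding what's allowed.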
They are unlikely to ever be able to cleanly keep multiple people and their input differentiated.
They don't have any ability to differentiate between types of information. They can't keep secrets. Even the preloaded prompts that they're supposed to hide are regularly extracted by users who are trying. Anything the model comes up with on the fly is going to have no such protection. What's the Big Bad up to? You'll likely find out as soon as you start inquiring.
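The only reliable fix there is architectural: if a secret never enters the model's context window, no amount of clever prompting can extract it. A minimal sketch of that idea (hypothetical structure, invented names):

```python
# Hypothetical architecture: campaign secrets live in ordinary
# server-side state and are never placed in the model's prompt. The
# generator only ever sees facts the plot engine has revealed.
hidden_state = {"big_bad_plan": "poison the city's wells on the new moon"}
revealed: set[str] = set()

def reveal(key: str) -> None:
    # Called by the plot engine when players legitimately learn a fact.
    revealed.add(key)

def build_model_context(player_input: str) -> str:
    # Unrevealed secrets can't leak via prompt extraction, because they
    # were never in the prompt to begin with.
    known = "; ".join(hidden_state[k] for k in sorted(revealed))
    return f"Known facts: {known}\nPlayer: {player_input}"
```

Of course, anything the model invents on the fly still lives only in its own output text, which is exactly the unprotected case described above.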
They can't plan. No matter how improvisational the GM, you're not coming up with everything on the spur of the moment. Sometimes there's a thing that can follow from what's happened, and you take steps to set it up. Generative models can't; they operate entirely in the moment.
Like... The whole point of D&D in the modern age is collective storytelling and human interaction. An AI simply can't do that.
The human interaction part is the difficult part for an AI, and very well might never be possible. For example, I have one player who is a tad emotionally soft. Things I can get away with for my other players would cross a line for her. This gives me, as the DM, both a weapon and a pitfall: there is a balancing act of pushing on that player to work that weakness, while also carefully watching her mannerisms to make sure the player is not broken.
AI experts (which, ironically, this particular player happens to be) doubt we will ever be able to create an AI that can do that level of empathetic, individual-by-individual analysis.
But, you know what, neither can many people - just look at this thread itself, born of an individual who did not apply even a basic level of critical reading to an interview. Or any number of threads on this site where folks could have avoided their issues by using a little empathy and common sense.
Overall, an AI DM may never be perfect - but, for a lot of players, an imperfect DM is better than no DM at all. We have five decades of data showing players will put up with incompetent, rude, directionless, railroading, and/or unempathetic DMs if it means they get to play the game. Additionally, people have been enjoying video games, and games like Gloomhaven which try to recreate the D&D experience, for years - those kinds of games tend to sell very well and are often among the top-rated in their genre, so there clearly is a tolerance for a limited storyteller.
I expect an AI DM will be able to scratch the basic itch in a way which gives folks a basic level of satisfaction. Perhaps it is not going to weave a rich tapestry of love and death, victory and heartache; perhaps it will never be able to play on players’ particular quirks, bending them just far enough so they do not break. But, even if it never reaches the level that I expect from a DM, we are not too far away from it being able to replicate an experience sufficient for those willing to settle.
Don’t forget about AI DM downtime to update software, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one could only imagine.
Then you have the possibility the AI flips out at a player, begins to speak gibberish, and tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes "screw this, I can find better (or worse)" - then what?
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
Don’t forget about AI DM downtime to update software, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one could only imagine.
This is true of playing on Zoom, or DDB, anyway.
Then you have the possibility the AI flips out at a player, begins to speak gibberish, and tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes "screw this, I can find better (or worse)" - then what?
The former can happen, though not normally with modern generative models.
The latter, not so much. They can't "try to influence the user", because they have no intent. Can they be poked into spewing out inappropriate things? Absolutely. It's one of the many problems of the unbelievably huge training sets needed. But if you're not trying, it's pretty unlikely as I understand it, unless somebody is one of those people who finds acknowledging the existence of gay people to be unacceptable brainwashing or the like.
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
It'd have to be a pay service. These things are expensive to run and incredibly expensive to train. Currently, their business model involves burning vast amounts of venture capital money, but the VCs are eventually going to want a payday.
Additionally, if we are disqualifying DMs from DMing because they might flip out, start acting unethically, and begin just spewing nonsense, I know a whole lot of humans who would be disqualified from DMing. I bet many of us here have horror stories of a DM throwing a little hissy fit and causing the game to go pear shaped.
Which, again, brings the conversation back to a simple reality - AI DMs are probably never going to be as good as a good human DM, but the past five decades have shown that players are pretty darn tolerant of bad DMs if the alternative is no DM. Quite frankly, I think a lot of people would probably find the “oops, our AI DM threw a fit and went crazy” a whole lot more enjoyable and forgivable than “oops, our human DM is kind of a terrible person.”
I remember when people were training ChatGPT 2 to run a campaign. The funniest thing I find with ChatGPT 2 through 4 is that it does laughable parsing when things get nuanced.
I generally find campaigns to be more entertaining when they're nuanced... but I also find ChatGPT highly entertaining when nuance goes completely over its head (so to type).
Sure. A trained digital DM will be aligned more towards DMing, but it'll still flail hilariously with player creativity. Its usefulness will be limited to superficial processing of campaign mechanics.
Flail is not a typo. It's pretty much what ChatGPT does when I get down to things that are very specific. DALL-E 3 is even more hilarious the more specific I try to be. Neither are intended to be creativity replacers. Prompt mashers? Sure. That can help people think of something in different ways the same way that Tarot can help people think of problems in different ways, but that reduces them to mere tools, toys, and gimmicks.
I also remember ChatGPT 2 claiming that it could only be as unbiased as its training data and could make no assurances that the people who provided the data weren't being biased. ChatGPT 4 straight up claims it is unbiased—a disturbing development.
Human. Male. Possibly. Don't be a divider. My characters' backgrounds are written like instruction manuals rather than stories. My opinion and preferences don't mean you're wrong. I am 99.7603% convinced that the digital dice are messing with me. I roll high when nobody's looking and low when anyone else can see.🎲 “It's a bit early to be thinking about an epitaph. No?” will be my epitaph.
It’s not about dismissing the fact that there are always rotten apples in the bushel. It’s that if such a system is implemented and such an issue becomes relevant, are we all just supposed to go “oh well, no harm no foul” and continue on like it’s nothing? (A human GM/DM would be pulled aside for a quick word that addresses the issue, and you’d go from there. A digital GM/DM? Reboot, and bury the issue as “technical difficulties” and user misuse?)
Sounds like a quick way to get into really deep hot water by making excuses and passing the blame while attempting to half-ass a solution.
IMHO, if C.C. had a lick of sense, he’d put stock in investments that welcomed and supported GM/DM development and support, rather than keep tossing crap at the wall and seeing what sticks.
Computer DMs have been around for nearly fifty years -- Zork dates to 1977, Rogue to 1980. I would not be at all surprised by generative AI being used for cRPGs (possibly including a D&D-based cRPG), though video game AI is actually surprisingly primitive. However, people aren't going to call them "AI DMs"; they're going to call them "cRPGs".
Actually, it’d be better classed as an “ARPG” rather than a “cRPG”, as it’s the DM (artificial or not) that has to deterministically evaluate the situation and make a judgement based on past, present, and future events that might influence (bias) further situations, which can cause what some call “logic lock”. (As a programmer, my first lesson was that programs do what you tell them to do, not what you wanted or expected them to do.)
In the case of D&D, publicly people will try to define the system as an “AI DM/GM”, and others will call it whatever. Either way, the tech is just not ready to be implemented yet; it’s still in development and getting better, but still decades away from anything that could be viable.
Someone at WoTC please tell Chris Cocks to go lay down.
In a March 1st interview with VentureBeat, Cocks states he foresees a future where AI is generating content.
Absolutely not! The OGL debacle should have been a lesson in the community not putting up with nonsense. The blowback on AI tools used to touch up art in Bigby's should have been a lesson, yet here we are.
Hasbro needs to sell WoTC, or Chris Cocks needs to either leave or stay out of it. He clearly is not friendly to this community.
Why is Hasbro/WoTC the one company that shouldn't be investigating the potential uses of AI? The company has to think about its investors', consumers', and employees' (current and future) interest in how AI can service them. Competitors of theirs are surely paying attention and exploring its use.
I’d rather be offered an AI ASSISTANT than a bot that tries to run a game. IMHO
The arguments I hear against AI sound an awful lot like the arguments people had against past paradigm-shifting technologies like television or cars or the internet. Those arguments didn't age well. And as for "ChatGPT 4 has severe limitations as a DM," of course it does. This is the infancy of AI. The first cars couldn't outrun horses either.
No, we're not going to take a brave stand until it goes away. Because it's not going away. Anticipating how it is going to transform your industry is basically required for any company that wants to still be relevant 10 years from now. Now what that actually looks like is just a guess at this point, which is why we need to be talking about it and thinking about it. AI has at least as much potential to assist humans as it does to "replace" them. But rejecting any discourse like an old man shaking his fist at kids on his lawn is not going to help steer things in a productive direction.
"High ROI for investors" would be D&D's objective whether AI was involved with it or not. It's a product made by a for-profit corporation.
AI is, in both the literal meaning of the word and as a D&D metaphor, a mimic
Active characters:
Edoumiaond Willegume "Eddie" Podslee, Vegetanian scholar (College of Spirits bard)
Lan Kidogo, mapach archaeologist and treasure hunter (Knowledge cleric)
Peter "the Pied Piper" Hausler, human con artist/remover of vermin (Circle of the Shepherd druid)
PIPA - Planar Interception/Protection Aeormaton, warforged bodyguard and ex-wizard hunter (Warrior of the Elements monk/Cartographer artificer)
Xhekhetiel, halfling survivor of a Betrayer Gods cult (Runechild sorcerer/fighter)
Do any video games out today get as close to a DM as we would want?
Do they create content like good DM?
Do they run and mission as good as a DM?
Can they run a campaign even close to a real live DM?
hundreds of millions in investments and 20 years of development and we still feel like we are in a rail road campaign. Being guided into a specific direction. Oh they could ad a bunch of side missions or even make a dozen core missions but in the end we all know the outcome of most of them.
Now ad in a second player and the whole system starts to break down with conflicting decisions from multiple payers in the same game/team.
The Ai can only be as good as the sources it uses and from what the AI systems are putting out lately I would hate to see what data sources they are using.
As I said in my post on the first page, we've reached the point where AIs are starting to be trained on content that was produced by AIs.
Find your own truth, choose your enemies carefully, and never deal with a dragon.
"Canon" is what's factual to D&D lore. "Cannon" is what you're going to be shot with if you keep getting the word wrong.
That does explain some experiences, especially when the Ai goes f’it here’s a human.
Place dental impression upon the metallic gluteus Maximus.
Ok, don't take this the wrong way, because I am sensitive to ethical concerns, I'm just raising an eyebrow at the special case that seems to be being made for AI.
Unethical behaviour is endemic to society, at least certainly in the business world. Anyone who hasn't seen it, is either willfully looking away or just hasn't been affected by it...yet. There are even those on this thread that have been quite dismissive when I (and others) have pointed out examples of unethical behaviours of WotC. Go into a shop that belongs to a chain. I can almost guarantee that unethical stuff is going on, even if you can't see it. The market practically forces that behaviour.
But nobody cares, not long enough to actually think about it. But because WotC might do something that might be unethical with AI, despite their express claims to the contrary and we can't audit that...it's a substantial problem?
The base.concern, that AI uses "stolen" IP, is a lot greyer than some are claiming. We're now getting to "I'm not happy because there's a possibility that it's happened even when people are expressly saying otherwise". As I said, it seems a special case.
The impact is relative though. Let's look at water usage, since that was explicitly mentioned. How much would it cause DDB to expand its water footprint? A fraction of it's current size? Double? Hundredfold?
If it's a hundredfold, then yes, we'd need to be very careful about it. If it's a fraction...then I hope those complaining are vegetarians, because you can save a lot more water by eating vegetarian rather than meat based meals. You can have your water footprint just by making that change, and that's a lot more significant than what using DDB does currently (presumably, if using DDB really does use that much water that it rivals meat, then we need to be having a discussion about using DDB v physical books). It rings a little hollow if they're fretting about about a couple of litres once a week, give or take, on AI whilst wasting literally hundreds every day on a diet choice. If it's only a fraction, there are a lot more meaningful ways of reducing water usage. Then there's the spectrum in between.
Which is why scale is important to the discussion.
If you're not willing or able to to discuss in good faith, then don't be surprised if I don't respond, there are better things in life for me to do than humour you. This signature is that response.
The thing is, AI is now developing incredibly fast. I first started using ChatGPT (I say use...I don't really use it, I just experimented with it to see what it could do, I've done nothing actually productive with it) a year ago and it was shockingly awful. Now, it's actually capable of talking with me, understanding what I'm saying and giving meaningful responses. It used to just spit our garbage, but now it mostly gets things more or less right (albeit with the occasional curveball). It's now at the point where I can get more sense out of it than the average human on the internet.
Sure, it's not where it needs to be at to DM a game for us. However, it is improving, and it's improving rapidly. We'll see AIs capable of providing a decent DM experience within years. I'll be shocked if it isn't developed within two decades. I wouldn't be surprised if it's commonplace by the end of this decade.
If you're not willing or able to to discuss in good faith, then don't be surprised if I don't respond, there are better things in life for me to do than humour you. This signature is that response.
We're not going to see viable AI DMs without a paradigm shift in the tech. Generative models can't do it; it's not what they're made for.
They are, at the core, designed to output words that are probably the correct words to continue the output, based on their initial model and the text so far. That's all they do.
They can't know what rules are, or enforce them. They're just more text to guide the future text. If a wizard says "I cast fireball", they're likely to try to describe a fireball being cast, because, in their training data, when a wizard says they cast fireball, they can cast it. The fact that this wizard is first level is unlikely to enter into it.
They are unlikely to ever be able to cleanly keep multiple people and their input differentiated.
They don't have any ability to differentiate between types of information. They can't keep secrets. Even the preloaded prompts that they're supposed to hide are regularly extracted by users who are trying. Anything the model comes up with on the fly is going to have no such protection. What's the Big Bad up to? You'll likely find out as soon as you start inquiring.
They can't plan. No matter how improvisational the GM, you're not coming up with everything on the spur of the moment. Sometimes there's a thing that can follow from what's happened, and you take steps to set it up. Generative models can't; they operate entirely in the moment.
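The "output the probable next words" behaviour described above can be illustrated with a toy sketch. This is purely illustrative (all names here are mine, and a real LLM uses a neural network over tokens, not a word-frequency table), but the principle is the same: emit whatever word tends to follow, with no game state anywhere. Nothing tracks spell slots or character level, so the fireball gets narrated regardless of whether the wizard could legally cast it.

```python
# Toy sketch of pure next-token prediction (NOT a real LLM).
# The "model" is just a table of which word tends to follow which,
# learned from nothing but example text. There is no rulebook, no
# hidden state, no notion of a level-1 wizard -- only continuations.
from collections import Counter, defaultdict

training_text = (
    "the wizard says i cast fireball and the fireball explodes "
    "the wizard says i cast fireball and the room burns"
).split()

# Count word -> next-word frequencies (a bigram model).
follows = defaultdict(Counter)
for cur, nxt in zip(training_text, training_text[1:]):
    follows[cur][nxt] += 1

def continue_text(prompt, n_words=5):
    """Greedily append the most frequent continuation, word by word."""
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # never seen this word: nothing to predict
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("i cast fireball"))
# -> i cast fireball and the wizard says i
```

Scaling the frequency table up to a trillion-parameter network makes the continuations vastly better, but it does not by itself add a rules engine, secrets, or forward planning; those would have to be bolted on around the model.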
The human interaction part is the difficult part for an AI, and it very well might never be possible. For example, I have one player who is a tad emotionally soft. Things I can get away with for my other players would cross a line for her. This gives me, as the DM, both a weapon and a pitfall - there is a balancing act of pushing on that player to exploit this weakness, while also carefully watching her mannerisms to make sure the player is not broken.
AI experts (which, ironically, this particular player happens to be) doubt we will ever be able to create an AI that can do that level of empathetic, individual-by-individual analysis.
But, you know what, neither can many people - just look at this thread itself, born of an individual who did not apply even a basic level of critical reading to an interview. Or any number of threads on this site where folks could have avoided their issues by using a little empathy and common sense.
Overall, an AI DM may never be perfect - but, for a lot of players, an imperfect DM is better than no DM at all. We have five decades of data showing players will put up with incompetent, rude, directionless, railroading, and/or unempathetic DMs if it means they get to play the game. Additionally, people have been enjoying video games or games like Gloomhaven which try to recreate the D&D experience for years - those kinds of games tend to sell very well and are often among the top-rated in their genre, so there clearly is a tolerance for a limited storyteller.
I expect an AI DM will be able to scratch the basic itch in a way which gives folks a basic level of satisfaction. Perhaps it is not going to weave a rich tapestry of love and death, victory and heartache; perhaps it will never be able to play on players’ particular quirks, bending them just far enough so they do not break. But, even if it never reaches the level that I expect from a DM, we are not too far away from it being able to replicate an experience sufficient for those willing to settle.
Don’t forget about AI DM downtime for software updates, glitches, crashes, lag, server maintenance, debugging, data attacks, and a host of other who-knows-what that could affect the system in ways one could only imagine.
Then you have the possibility the AI flips out at a player, begins to speak gibberish, or tries to influence the end user in ways some might think or believe are unethical, and for whatever reason that player goes “screw this, I can find better (or worse)” - then what?
Lastly, IMO, when would such an AI DM become a pay-to-play service? Gotta pay the electric bills on those servers and the equipment maintenance costs, and those things are not cheap in the slightest.
Place dental impression upon the metallic gluteus maximus.
This is true of playing on Zoom, or DDB, anyway.
The former can happen, though not normally with modern generative models.
The latter, not so much. They can't "try to influence the user", because they have no intent. Can they be poked into spewing out inappropriate things? Absolutely. It's one of the many problems of the unbelievably huge training sets needed. But if you're not trying, it's pretty unlikely as I understand it, unless somebody is one of those people who finds acknowledging the existence of gay people to be unacceptable brainwashing or the like.
It'd have to be a pay service. These things are expensive to run and incredibly expensive to train. Currently, their business model involves burning vast amounts of venture capital, but the VCs are eventually going to want a payday.
Additionally, if we are disqualifying DMs from DMing because they might flip out, start acting unethically, and begin just spewing nonsense, I know a whole lot of humans who would be disqualified from DMing. I bet many of us here have horror stories of a DM throwing a little hissy fit and causing the game to go pear shaped.
Which, again, brings the conversation back to a simple reality - AI DMs are probably never going to be as good as a good human DM, but the past five decades have shown that players are pretty darn tolerant of bad DMs if the alternative is no DM. Quite frankly, I think a lot of people would probably find the “oops, our AI DM threw a fit and went crazy” a whole lot more enjoyable and forgivable than “oops, our human DM is kind of a terrible person.”
I remember when people were training ChatGPT 2 to run a campaign. The funniest thing I find with ChatGPT 2 through 4 is that it does laughable parsing when things get nuanced.
I generally find campaigns to be more entertaining when they're nuanced... but I also find ChatGPT highly entertaining when nuance goes completely over its head (so to type).
Sure. A trained digital DM will be aligned more towards DMing, but it'll still flail hilariously with player creativity. Its usefulness will be limited to superficial processing of campaign mechanics.
Flail is not a typo. It's pretty much what ChatGPT does when I get down to things that are very specific. DALL-E 3 is even more hilarious the more specific I try to be. Neither are intended to be creativity replacers. Prompt mashers? Sure. That can help people think of something in different ways the same way that Tarot can help people think of problems in different ways, but that reduces them to mere tools, toys, and gimmicks.
I also remember ChatGPT 2 claiming that it could only be as unbiased as its training data and could make no assurances that the people who provided the data weren't being biased. ChatGPT 4 straight up claims it is unbiased—a disturbing development.
Human. Male. Possibly. Don't be a divider.
My characters' backgrounds are written like instruction manuals rather than stories. My opinion and preferences don't mean you're wrong.
I am 99.7603% convinced that the digital dice are messing with me. I roll high when nobody's looking and low when anyone else can see.🎲
“It's a bit early to be thinking about an epitaph. No?” will be my epitaph.
It’s not about dismissing the fact that there are always rotten apples in the bushel. It’s about what happens if such a system is implemented and such an issue becomes relevant - are we all just supposed to go “Oh well, no harm no foul” and continue on like it’s nothing? (A human GM/DM would be pulled aside for a quick word that addresses the issue, and things would go from there. A digital GM/DM? Reboot it and bury the issue as “technical difficulties” and user misuse?)
Sounds like a quick way to get into really deep hot water by making excuses and passing the blame while attempting to half-ass a solution.
IMHO, if C.C. had a lick of sense, he’d put stock in investments that welcome and support GM/DM development, rather than keep tossing crap at the wall and seeing what sticks.
Computer DMs have been around for nearly fifty years -- Zork dates to 1977, Rogue dates to 1980. I would not be at all surprised by generative AI being used for cRPGs (possibly including a D&D-based cRPG), though video game AI is actually surprisingly primitive. However, people aren't going to call them "AI DMs", they're going to call them "cRPGs".
Actually, it’d be better classed as an “ARPG” rather than a “cRPG”, as it’s the DM (artificial or not) that has to deterministically evaluate the situation and make a judgement based on past, present, and future events that might influence (bias) further situations, which can cause what some call “logic lock”. (As a programmer, my first lesson was that programs do what you tell them to do, not what you wanted or expected them to do.)
In the case of D&D, publicly, people will try to define the system as an “AI DM/GM”, and others will call it whatever; either way, the tech currently just is not ready to be implemented. It’s still in development and getting better, but still decades away from anything that could be viable.
How is this not central to a horror movie? ;-)
Why is Hasbro/WotC the one company that shouldn't be investigating the potential uses of AI? The company has to think about its investors', consumers', and employees' (current and future) interest in how AI can serve them. Competitors of theirs are surely paying attention and exploring its use.
I’d rather be offered an AI ASSISTANT than a bot that tries to run a game. IMHO
The arguments I hear against AI sound an awful lot like the arguments people had against past paradigm-shifting technologies like television or cars or the internet. Those arguments didn't age well. And as for "ChatGPT 4 has severe limitations as a DM," of course it does. This is the infancy of AI. The first cars couldn't outrun horses either.
No, we're not going to take a brave stand until it goes away. Because it's not going away. Anticipating how it is going to transform your industry is basically required for any company that wants to still be relevant 10 years from now. Now what that actually looks like is just a guess at this point, which is why we need to be talking about it and thinking about it. AI has at least as much potential to assist humans as it does to "replace" them. But rejecting any discourse like an old man shaking his fist at kids on his lawn is not going to help steer things in a productive direction.
My homebrew subclasses (full list here)
(Artificer) Swordmage | Glasswright | (Barbarian) Path of the Savage Embrace
(Bard) College of Dance | (Fighter) Warlord | Cannoneer
(Monk) Way of the Elements | (Ranger) Blade Dancer
(Rogue) DaggerMaster | Inquisitor | (Sorcerer) Riftwalker | Spellfist
(Warlock) The Swarm