Hi All!
I've tried looking for this. GPT is horrific for D&D rules. But I'm sure someone must have made a custom AI that has all of it stored--I just can't find one. Anyone happen to know of one?
Cheers!
LLMs don't follow rules; they can't do logic. GPT has read all the D&D rules, but it can't understand them, so it can't follow or enforce them, and neither can any other LLM.
What Agilemind said. At their core, LLMs just predict the most likely words to come next. They can't understand what the words mean. If you say "I cast fireball", they're going to describe a fireball being cast, because that's what happens in their example texts. The fact that you're a first-level fighter won't come into it.
What is the possible benefit of an AI that knows the rules? Read the books, or google something specific if you want to save time.
The only way a custom AI for D&D would happen is for WotC to do it themselves or license it to an external party. Also, this site integrates the rules quite well for character management and now Maps!
I do use AI in my games. Less as a live use of the rules and more of a "Find flaws in my idea" tool. That and "Here's a template, build me a profile for a Blacksmith who has a quest to share." functionality.
I realize this is entirely subjective, but I will never understand why people want to have AI tools do the fun parts.
Even if WotC were to do it themselves, the fundamental problems of LLMs would still exist.
Now, in theory, they could make a game engine to implement all the game mechanics, but:
It's really hard
There's too much that's up for interpretation in real-life game scenarios. As soon as you have real DMs and real players, they're going to try to do things that the engine isn't prepared for.
What's worse, they'll disagree with the engine and want to change things
Thinking about it now... it might not actually be that difficult to make a sufficient game engine that you could plug into the LLM, if you built it off the generic DM guidance and a digital character sheet.
What you need is something that picks an appropriate skill check, which LLMs could be trained to do from all the Reddit and online discussion-forum questions. Then you need the game engine to set a numerical DC based on the LLM's output, determine success / failure through the appropriate mechanics, and feed that back into the LLM to describe the outcome. Something like the exchange below (a rough code sketch of the loop follows it):
PLAYER : "I attempt to jump through the window, into the house."
Prompt Engine : "What skill check would be required to jump through a window into a house?"
LLM : "You should use an Acrobatics check. The difficulty depends on how high ...."
Game Engine : detects Acrobatics check & spits out : "Make an Acrobatics check!"
Player response: "I got a 14."
Game Engine : detects 14 & stores it
Prompt Engine : copies the LLM's previous description of the house. "How difficult would it be to jump through a window?"
LLM : "It would be hard and dangerous to jump through such a window."
Game Engine : detects "hard" and "dangerous" and looks up the player character level, and cross references this with the GM guide tables to pick a DC and a damage roll. Then compared its chosen DC to the 14 from earlier to determine success / failure, and rolls damage. Sends to Prompt Engine: "They succeeded but take 10 damage."
Prompt Engine: "They succeeded in jumping through the window, but took 10 damage in doing so."
LLM: "You cover your face with your hands and leap through the window, landing gracefully on the other side, but the broken glass cuts you causing you to take 10 piercing damage."
This sounds like an enormous amount of money and work to build something to do very poorly what a human being can already do much better.
Oh, 100% it would be mediocre at best, and it wouldn't solve the narrative inconsistencies of LLMs between scenes. But you could probably get it to a level where young kids or exhausted adults would be satisfied.
Fast adjudication; it's precisely about saving time. Not just for me, but for my players. If you want to waste time, then yes, google it or read the books.
Why are you looking up rules mid-game? Fast adjudication is easy: you say, "I think it works like XYZ, so we're going to run it that way for now, and [player who is rules lawyering] can look into it before next game."
No AI will ever be faster than Googling it.
That's a popular sentiment. I believe it to be largely untrue, mostly because the RPG community don't like the idea. Not pointing at anyone in particular, it's just ... an observation.
GPT is pretty good at legal texts. I know this for a fact because I work with legal texts, and I use GPT to look up stuff and find interpretations. And it does this pretty well. Does it do it flawlessly? Oh hell no. But well enough to save me time? Heck yes.
I work with Danish unemployment regulations. It's the single largest body of law text in our fair nation, roughly five times the size of the D&D core rulebooks.
There's a core difference: looking up §56 is not the same as understanding how class mechanics synergize when multiclassing (as an example). But I see no reason to expect that GPT shouldn't be able to handle the rules in general.
I could be wrong, though. I've had a few simple - but largely successful - games with GPT as game master. But they were a fun exercise for my 5yo, not a full group of adults.
Blanket disclaimer: I only ever state opinion. But I can sound terribly dogmatic - so if you feel I'm trying to tell you what to think, I'm really not, I swear. I'm telling you what I think, that's all.
I say this not wishing malice or to appear horrible, but having been a teacher and seen how most people never receive this advice: learn how to use a book.
If you know how to properly use a table of contents, index, and glossary, trust me when I say that with a physical book you can find what you're looking for in less time than it takes to type your query into a chatbot. Sadly, WotC are silly enough not to offer PDFs, but if digital books are your thing, the same tools exist in well-designed PDFs, especially combined with a good Ctrl+F search. For someone like myself, who has learnt how to quickly reference and utilise books, there is really never a case where a chatbot is the faster solution. The only times I struggle are when the writers and publishers have designed their product poorly; Monsters of the Multiverse and the new 5.5e books are great examples of poor design.
An index, used correctly, is a powerful feature of a book. I can get what I need from the 2014 Monster Manual twice as fast as from MotM because of its proper indexing. The fact that the 5.5e Monster Manual is designed the way it is suggests people no longer understand the purpose of an index. That said, the new book being alphabetical also makes it a cinch to locate what I'm looking for, far quicker than speaking or typing my query into a glorified chatbot. So again, and I mean no malice here: learn how to use an actual book and it will make you much faster at using them.
LLMs are akin to NFTs, 3D films, and VR: a fad upon which a bubble has been built. That bubble is bursting, given the inherent costs needed to scale and make the tech economically viable. I've seen too many tech bubbles burst, and I lost my first business to one, so I can recognise them. Please don't lean too heavily on a tech that will become useless in a few years' time.
Agile,
Regarding LLMs: they're actually not bad, as Acromos noted. "They can't do logic" - sure, but they have auxiliary modules which CAN do logic, and those are often integrated seamlessly (there's a rough sketch of what I mean further down). For D&D they're far, far from good, but they can provide good feedback. It's a great tool for new and experienced players to understand the rules, and an excellent thought partner for adjudication on my end. While AI very often steers my players wrong and frequently lies, it also gives genuinely helpful answers. Anything that helps my players understand the game, I support--fewer questions I need to answer.
And it's not just about rules--it's about finding resources and direct links to content on the topics I'm looking at. I think the greatest step forward for LLMs right now would be for them to be able to say "I don't know."
Why am I looking up rules mid-game?
I never said I was. But it would be useful for players between turns, to help them understand their classes and abilities. They're already using AI to figure out what they're doing, and it often steers them wrong (though often right, too).
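To make the "auxiliary modules" bit concrete, here's a tiny Python sketch of the pattern I mean: the dice math and the DC comparison live in a deterministic function, and the model only decides when to call it and with what arguments. This is my own illustration, not any vendor's actual API; the function name and the schema shape are invented, though most function-calling interfaces expect something roughly like it.

```python
import json
import random

def ability_check(modifier: int, dc: int, advantage: bool = False) -> dict:
    """The logic an LLM can't be trusted to do itself: roll a d20
    (twice with advantage), add the modifier, compare against the DC."""
    rolls = [random.randint(1, 20) for _ in range(2 if advantage else 1)]
    total = max(rolls) + modifier
    return {"rolls": rolls, "total": total, "dc": dc, "success": total >= dc}

# A vendor-neutral tool description; real function-calling APIs expect
# something shaped roughly like this, but the exact fields vary.
ABILITY_CHECK_TOOL = {
    "name": "ability_check",
    "description": "Roll a d20 ability check and report success against a DC.",
    "parameters": {
        "type": "object",
        "properties": {
            "modifier": {"type": "integer"},
            "dc": {"type": "integer"},
            "advantage": {"type": "boolean"},
        },
        "required": ["modifier", "dc"],
    },
}

if __name__ == "__main__":
    # The model would emit arguments like these; the host app runs the
    # function and feeds the JSON result back to the model to narrate.
    args = {"modifier": 5, "dc": 15, "advantage": True}
    print(json.dumps(ability_check(**args)))
```

The model still narrates and still gets facts wrong, but at least the numbers it reports come from code rather than from word prediction.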
Martin,
Thank you for your note highlighting that I should learn how to use a book better. I am a teacher--and the thing is, I never really did learn how to use a book, and what you're saying just highlights my existing trauma. Books are slow and labor-intensive--there is no search you'll conduct in a book that will outpace digital tools. The issue you describe with poor design exists only because of your methodology. Learning how to type faster is more useful than learning how to use a glossary. If you have to open a book, you're already behind the times. If you have teaching experience, you may recall that US teaching standards used to include learning to use a card catalog--but standards evolve.
"LLM's are a fad."
This reads like you don't know enough about this topic. Token costs are dropping 10x per year. Altman has already stated that they're at 150x less cost for similar quality between GPT revisions. Then on top of that, hardware is generationally increasing efficiency. Thousands of positions are ALREADY being eliminated by AI replacing them because it is more economically viable. There's loads of data on this. But this is beside the topic.
Answering the original question.
It doesn't sound like a resource like this exists yet. It certainly could be done--maybe that's the way of the future for niche content: get a group of thousands of DMs to provide feedback until a model can make accurate predictions.
You don't want an LLM, you want a search engine. LLMs are about producing answers similar to what other people have said, and that's not what you want -- you want to ask a natural language query and get the specific rule or rules covering your question.
Now, there certainly are AI search engines, but I think they're still reliant on crowdsourcing methods (index a bunch of places where people ask the question and get answers; the most popular answers are presumed true) and as such, reliant on big data -- you can't just toss a book into the hopper, you need to toss a million people's answers to questions into the hopper as well. None of this is impossible, but I don't think anyone actually has the data they need.
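As a rough illustration of that point, here's a minimal keyword-scored rules lookup in Python. The snippets are loosely paraphrased placeholders (in practice you'd load whatever rules text you're licensed to use from disk), and a real tool would add stemming, synonyms, and proper ranking; the point is just that this is a match-and-score problem, not a text-generation problem.

```python
import re
from collections import Counter

# Hypothetical local snippets, loosely paraphrased for illustration; a real
# index would be built from the full rules text you have on disk.
RULES = {
    "Grappling": "Use the Attack action to make a special melee attack, a grapple, "
                 "contested by the target's Athletics or Acrobatics.",
    "Opportunity Attacks": "You can make an opportunity attack when a hostile creature "
                           "you can see moves out of your reach.",
    "Jumping": "Your Strength determines how far you can jump; a long jump covers up to "
               "your Strength score in feet after a 10-foot running start.",
}

def tokenize(text: str) -> Counter:
    """Lowercase word counts; no stemming or stop-word handling in this sketch."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def search(query: str, k: int = 2) -> list[tuple[str, int]]:
    """Score each snippet by how many words it shares with the query."""
    q = tokenize(query)
    scored = [(title, sum((q & tokenize(body)).values())) for title, body in RULES.items()]
    return sorted((s for s in scored if s[1] > 0), key=lambda s: -s[1])[:k]

if __name__ == "__main__":
    print(search("how far can I jump with 16 strength"))  # top hit should be "Jumping"
```

Crowdsourced Q&A would mostly help with ranking and with mapping casual phrasing onto rulebook wording; the retrieval step itself doesn't need an LLM at all.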
AI has little impact on worker productivity: https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
And it makes people less connected to their work and care less about it: https://arxiv.org/abs/2506.08872
Absolutely, lots of companies are "reducing costs" by laying off people and telling the remaining few to just use AI to replace them. However, most of these are minimum-wage and outsourced jobs that companies don't care about: your customer service, your telemarketers, your help lines, etc. Even modern, sophisticated LLMs can't run a vending machine: https://www.anthropic.com/research/project-vend-1
Current AI/LLMs are nothing more than "dumb" AIs designed to automate specific tasks, extrapolating conclusions from whatever source material gets fed to them. It's frustrating that, more often than not, I have to go and rewrite the AI slop my team puts out at work because it doesn't meet production standards--the LLM can't logically apply them. You're not gonna find what you're looking for here, because AI can't do what you want. Also, AI in any kind of official capacity is a controversial subject in the TTRPG community; just look at what happened with Spelljammer and AI art.
Bigby's was the only confirmed usage.
There was no usage in Spelljammer.
If your source is Indestructoboy, he later retracted it, & was forced to admit he used a tool that used AI to detect AI in the first place.
If your source is Dungeons & Discourse, Professor DM, or a similar channel: these channels tend to have ulterior motives and/or are about milking drama, and likely sourced Indestructoboy's lie.
If your source is Reddit: false AI flags happen all the time there. One of my best friends got temp-banned from a subreddit for allegedly using AI to make a post... he's autistic, & the accuser & moderator wouldn't hear him out, because of Reddit detectives' inability to admit that they're wrong.
Sorry, I meant Bigby's; got the name wrong. I don't watch either of those channels you mentioned, for the same reason you said. I just wanted to highlight that it's a touchy subject.