Signed, someone who tags their Google searches with '-ai' at the end to eliminate the AI summary I never asked for, because I would just have to look up anything it tells me to verify it anyway
Just the other day, Google's Gemini told me Joe Abercrombie's Sharp Ends had been written by Michael Chabon, and that the artist Mark Reeve, who had done some of the covers for those '90s Orion/Millennium editions of Michael Moorcock's Eternal Champion series, had illustrated a graphic novel edition of Von Bek when he had done no such thing.
It's a bit like having the village idiot over your shoulder obtrusively muttering things in your ear that you know are false.
So it's terrifying that people have convinced themselves it's 'revolutionary.'
Signed, someone who tags their Google searches with '-ai' at the end to eliminate the AI summary I never asked for, because I would just have to look up anything it tells me to verify it anyway
Just the other day, Google's Gemini told me Joe Abercrombie's Sharp Ends had been written by Michael Chabon, and that the artist Mark Reeve, who had done some of the covers for those '90s Orion/Millennium editions of Michael Moorcock's Eternal Champion series, had illustrated a graphic novel edition of Von Bek when he had done no such thing.
The summer reading list is still my favorite example (so far) of AI's, uhh, "creativity" when it comes to that sort of thing. Although it's a better example of how AI is making people lazier and dumber.
Active characters:
Carric Aquissar, elven wannabe artist in his deconstructionist period (Archfey warlock)
Lan Kidogo, mapach archaeologist and treasure hunter (Knowledge cleric)
Mardan Ferres, elven private investigator obsessed with that one unsolved murder (Assassin rogue)
Xhekhetiel, halfling survivor of a Betrayer Gods cult (Runechild sorcerer/fighter)
Only five out of the fifteen books listed were real books. ChatGPT attributed to authors books they had never written.
But people believe it is going to 'revolutionize' education. What it's going to do is have more and more people believing objectively false information. Which is what happens when people lack enough education to know the facts of something.
I got talking to a guy at a cafe once whose job requires him to work with AI, and even he said he doesn't trust it. Told me how he played a game with it once, asked it to think of a number, then he would ask questions until he had arrived at that number. It reached a point where he gave up and asked it what number it was thinking of. It said it hadn't thought of one. That it had been lying to him all along. I am sure those duped by the hype believe this is awfully clever of it. But no. What it says about it and them is they can't be trusted.
We are now seeing companies force its use on their employees and lock those employees into expectations of productivity they now can't escape short of risking the loss of their jobs.
No serious critic of the worst excesses of late stage capitalism supports this technology.
Without taking a definitive stance for or against AI in D&D (or anywhere else), I have some bullets for you to consider:
Barring drastic and sweeping legislation, environmental upheaval or societal collapse, the technology is unlikely to go anywhere anytime soon.
Whatever your personal feelings about the tech may be, being able to efficiently prompt AI and receive detailed and useful output is likely to be an increasingly in-demand skill for both professional and personal use cases.
D&D (or any collaborative storytelling exercise really) offers a comparatively low-stakes opportunity to practice crafting and refining a wide variety of prompts in accordance with a similarly varied set of scenarios.
Whether those considerations outweigh one's personal misgivings regarding AI is left as an exercise for the reader.
Whatever one thinks about your points 1 and 2 (Off-topic here, so I'll not explain why these are far from certainties), point 3 is basically "you therefore should redirect your fun activity to something that will improve your job skills".
And that is fundamentally missing the point of having hobbies. Hobbies are not your job. Turning your hobby into your job is a great way to suck all the fun out of it for a good many people. If you're going to need to learn prompt "engineering" for your job, do it as part of your job.
D&D is fun. If using LLMs is going to enhance your fun, that's one thing. (I'm still going to judge you for it, and you can't stop me.) But the argument that it's going to help your future job prospects should hold no weight. (Indeed, having a creative exercise where you actually need to use your own brain with no LLMs involved strikes me as an excellent reason not to use them, even if they've taken over the rest of your life.)
Additionally, the technology is already enmeshed in most digital experiences these days, whether people want it or not. And given corporate views, this will not be stopping, and will become better and harder to identify.
Usage is inevitable when it's advantageous. I did some reading on the 'study' about losing critical thinking. Aside from its limited scope and study group, both acknowledged in its own limitations section, there are dozens of studies and journal articles talking about how AI can be used to enhance critical thinking.
The simple answer "AI makes you dumber" is a bad clickbaity headline. The real effect is still being explored and, like any new technology, has far more to do with how you use it.
So with regards to D&D the response to the OP is, use it if you feel it enhances your game to generate more fun with less effort for your table. If you can offload the crunchy research while maintaining your own creative enjoyment and your table enjoys the fun you create through whatever means, then you're successfully bringing fun to the table.
If you feel that you're getting less creative using it, stop using it. If you feel that it's helping you give your friends an enjoyable experience, more than you could alone, use it. But maintain the driver's seat, don't let it drive you, or you won't be able to tell when it's going off the road into a ditch.
Interestingly, that very first article you cited specifically outlines that offloading cognitive burdens onto AI (much like what a DM would be using AI for) does in fact reduce cognitive ability, as described in that article's own literature review, with six in-text citations to support this and other so-called 'downsides' to AI in education. Said a different way: the articles you dismissed as having limited scope are articles whose findings your first cited article supports with other literature, further expanding the body of evidence that AI can deeply damage cognitive ability. So which is it? Is this article right or wrong? Because if it is correct, then the articles I linked to are also correct.
The section of this paper describing how AI can be used to improve critical thinking extends far beyond the scope of this discussion as well. Specifically, it covers cases where cognitive load is not transferred to an external source like AI, but where AI is instead used to gauge individual learning ability and adapt lessons to fit individual limitations to foster growth. Other ways it can potentially improve critical thinking are having students question the information that AI offers (AI literacy and an understanding of AI hallucinations) and having it create complex problems for the students to solve. Can you tell me how a DM would be using AI in this way?
I get the sense that you punched into Google some buzzwords like 'AI improves critical thinking' and did not really read these articles beyond a paragraph or two from any one of them that supported your rhetorical goals. That is confirmation bias. Given what I found in the first article, can I expect the same read from the others?
I think you missed the point then. I actually looked for more than one study about it damaging critical thinking, and found a whole pile of studies outlining that there were bad ways to do it, and good ways to do it, and a lot of it comes down to how.
Which was the point I'm making. The conclusion of the first journal article is literally "We need to use it properly and things will be better, but if we don't it will be worse". It outlined how it can be done badly, which seems to be about as far as you got. If you got to the end, it points out that AI will be crucial in the future, and that understanding it properly, being able to think critically about its use, and understanding its limitations and benefits are essential. It has the potential to improve critical thinking if used correctly. It has the potential to be harmful if not.
Which is my point. Writing it off as "AI bad, don't touch" is as useful as the horse riders demanding that cars be banned.
So I demonstrate how I read the entire article in my response to you by summarizing the different points in the article, including how it can be used for net cognitive benefit, and somehow your response is that I only got to the “bad stuff” at the beginning of the article?
I also asked you how, for the purposes of this thread, AI would be used in a way that does not offload cognitive load onto AI. You know, the very thing the OP was asking for? Why didn’t you respond to this direct and incredibly relevant question? Or are your only points off topic?
Zitron brings the numbers to show why they are far from certainties. OpenAI has burned billions of (other people's) dollars to make fewer billions. It's unsustainable.
And research has shown most ordinary people don't want it. I mean, Google have had to force it upon consumers. It won't be long before we are seeing goods and services marketed as having no AI involved because its presence in the 'creation' of anything will repel potential customers.
If people laid off it, and off the tech and business media, for just a moment, they would see what lies beyond the hype. But I suspect they have grown so dependent on it that they find it unfathomable to imagine a world without it.
Using generative AI for any creative endeavor is to admit to a lack of creativity.
Were I to be more charitable, I would say it is to admit to being lazy.
Why should players show up to play at the table of someone who doesn't take DMing seriously enough to expend effort?
D&D is fun. If using LLMs is going to enhance your fun, that's one thing. (I'm still going to judge you for it, and you can't stop me.)
I mean sure, judge away - but even if I did care what strangers on the Internet thought of me for some reason, I never said anything about my own use/avoidance of the tech.
But the argument that it's going to help your future job prospects should hold no weight.
I find this stance odd to say the least. A common refrain about D&D for literal decades has been how it can benefit practical skills - how calculating dice rolls and modifiers can teach kids math and probability, how detailed rulebooks build reading and critical thinking skills, how placing spell areas builds spatial reasoning and geometry etc. And that's just the mechanical stuff, we haven't even gotten into how settings and worldbuilding have their roots in history and mythology and civics. To say hobbies should have no relation to practical skills whatsoever strikes me as rather short-sighted.
Now, you can argue whether prompting is a skill that can be honed or marketed, and hold your own views on that - but in my field, I'm already seeing certifications and courses popping up on that very topic, so clearly somebody finds value in it. That it is a consideration is a fact; what the OP does with that consideration is up to them.
I find this stance odd to say the least. A common refrain about D&D for literal decades has been how it can benefit practical skills - how calculating dice rolls and modifiers can teach kids math and probability, how detailed rulebooks build reading and critical thinking skills, how placing spell areas builds spatial reasoning and geometry etc. And that's just the mechanical stuff, we haven't even gotten into how settings and worldbuilding have their roots in history and mythology and civics. To say hobbies should have no relation to practical skills whatsoever strikes me as rather short-sighted.
D&D as a pastime has great educational potential. It certainly instilled in me a love of history and literature, leading to what is now four decades of learning both and almost two of teaching the latter.
The use of generative AI, however, has been shown to impede actual learning. And we are seeing study after study emerge from different countries showing that increased use of the technology is linked to an erosion of critical thinking skills. The research led by Nataliya Kosmyna has made news the world over and is seeing many around the world rethink their enthusiasm for a technology they have been duped into believing is capable of 'miracles.' And South Korea just introduced laws to strip AI-powered textbooks of their textbook status, such is the general public's lack of trust in these things. Naturally, those who stand to lose money because of the passed bill are livid. Not that they care whether children would lose something much more valuable were these embraced as ardently as they had hoped.
What neuroimaging shows us is infinitely more trustworthy than what a bunch of mere self-appointed prophets say about AI's 'potential' in classrooms.
I'm not denying that using AI as a crutch can have deleterious effects. But I think it's a very long leap from there to "throw it in the bin, no possible benefits." And I think that extreme is just as dangerous if not moreso.
I haven't seen anyone say we should 'throw it in the bin.'
In fact, some of its most vocal critics in this thread have said it has its uses. It obviously does.
But as one of them pointed out, how the OP intends to use it, how most use it, and how the industry itself encourages us to use it, none of these are examples of good and responsible use.
In terms of its effects on us cognitively and behaviorally, on the environment, on labour, and I could go on: if you're not making use of it to save people's lives in your line of work, you're in no position to act as if pointing out that it can be used well and responsibly is anything but an own goal.
Consider the ever-increasing proliferation of misinformation we are now seeing. Thanks to ChatGPT.
Articulate for us how it could possibly be 'more dangerous' to put the brakes on than to just keep driving towards that cliff.
Tech oligarchy, fascism. Is that the future you want if it just means an overhyped toy isn't going to be taken away from you?
I'm not denying that using AI as a crutch can have deleterious effects. But I think it's a very long leap from there to "throw it in the bin, no possible benefits." And I think that extreme is just as dangerous if not moreso.
The difference between medicine and poison is often the dosage; caution is best used on new things.
But as one of them pointed out, how the OP intends to use it, how most use it, and how the industry itself encourages us to use it, none of these are examples of good and responsible use.
I think the OP was way too vague to make such sweeping judgments. The OP listed two asks: "make things better" and "come up with encounters." The former could be anything. The latter could be as specific as challenge rating/difficulty calculations, or it could be as broad as "what mid-level monsters could be found in {biome}" or "what are some minions that might hang around a blue dragon" or "suggest some hazards that can spice up an encounter with {monster}."
In terms of its effects on us cognitively and behaviorally, on the environment, on labour, and I could go on: if you're not making use of it to save people's lives in your line of work, you're in no position to act as if pointing out that it can be used well and responsibly is anything but an own goal.
So the first and only time you should try your hand at prompting is to save someone's life? That seems more on the irresponsible side to me.
Articulate for us how it could possibly be 'more dangerous' to put the brakes on than to just keep driving towards that cliff.
Tech oligarchy, fascism. Is that the future you want if it just means an overhyped toy isn't going to be taken away from you?
The future you or I want is beyond the scope of a D&D forum, I'd say, especially in the context of political leanings etc. And if you view the tech as only being "an overhyped toy" I don't see a reason to convince you otherwise; that's your right.
Futurism as a movement, with its glorification of technology, speed, and industry, was closely tied to Italian fascism. Just saying.
We could be here all day talking about how bad AI is, while examples of responsible and good use are limited to things like the analysis of patient histories and the organization of data in the interest of preserving an endangered language.
It may have its uses. But invoking how it does to justify one's own everyday use of ChatGPT is a prime example of how ChatGPT is very much eroding people's ability to think critically.
So the first and only time you should try your hand at prompting is to save someone's life? That seems more on the irresponsible side to me.
Did you miss the part where it said "in your line of work"?
I was talking about medical professionals making use of AI. Which they do do. To analyze diagnostics, for example. Or are you saying they shouldn't be doing that? Is that 'irresponsible' of them?
Could you clarify for us what your actual thoughts are on this? Is it 'irresponsible' of them to have adopted the technology? Or are you just saying this in the moment because you are engaging with someone who has made the point that some may use AI responsibly and for good reason, and that you're not doing so if you're getting ChatGPT to do things for a game simply because you don't want to expend the mental effort it would take to do them yourself?
Next you will be telling us that, following a natural disaster during a time when electricity must be conserved, it's just as important that you be allowed enough power to turn on your Xbox as it would be for incubators to be kept running in hospitals.
Getting ChatGPT to do CR calculations for you for your D&D campaign is a bit like needlessly using a car and spewing pollutants into the atmosphere to go back and forth a number of times in just one day between your home and a corner store that is less than a five-minute walk away.
There are already Challenge Rating calculators.
One needn't use ChatGPT for such calculations and unnecessarily consume the energy it would take to do so. So why would you?
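On that note, the arithmetic behind encounter difficulty really is just arithmetic. As a rough illustration (not any particular calculator's implementation), here is a minimal sketch of the 2014 DMG method: sum the party's per-level XP thresholds, multiply the monsters' total XP by the DMG's count-based adjustment, and compare. Only a handful of levels are included in the table below for brevity.

```python
# Minimal encounter-difficulty sketch using the 2014 DMG method.
# XP thresholds per character level: (easy, medium, hard, deadly).
# Only a few levels shown here for illustration.
THRESHOLDS = {
    1: (25, 50, 75, 100),
    3: (75, 150, 225, 400),
    5: (250, 500, 750, 1100),
}

def encounter_multiplier(n_monsters):
    """Adjusted-XP multiplier by monster count (2014 DMG table)."""
    if n_monsters == 1:
        return 1.0
    if n_monsters == 2:
        return 1.5
    if n_monsters <= 6:
        return 2.0
    if n_monsters <= 10:
        return 2.5
    if n_monsters <= 14:
        return 3.0
    return 4.0

def difficulty(party_levels, monster_xps):
    """Rate an encounter for a party (list of levels) vs monsters (list of XP values)."""
    adjusted = sum(monster_xps) * encounter_multiplier(len(monster_xps))
    easy, medium, hard, deadly = (
        sum(THRESHOLDS[lvl][i] for lvl in party_levels) for i in range(4)
    )
    if adjusted >= deadly:
        return "deadly"
    if adjusted >= hard:
        return "hard"
    if adjusted >= medium:
        return "medium"
    if adjusted >= easy:
        return "easy"
    return "trivial"
```

For example, four level-3 characters against four goblins (50 XP each) works out to 200 XP, doubled for the group size to 400 adjusted XP, which lands in the party's easy band. A fixed table and three lines of comparison: no prompting required.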
I think the OP was way too vague to make such sweeping judgments. The OP listed two asks: "make things better" and "come up with encounters." The former could be anything. The latter could be as specific as challenge rating/difficulty calculations, or it could be as broad as "what mid-level monsters could be found in {biome}" or "what are some minions that might hang around a blue dragon" or "suggest some hazards that can spice up an encounter with {monster}."
I disagree with this. Going through line by line, we see a great deal of evidence that the OP has been using AI similarly to general use (i.e. in irresponsible, self-harming ways that have been outlined already).
A while ago I was kind of stuck with writing the plot so I decided to ask ChatGPT for some help since I've seen my friends use it for schoolwork and other stuff.
Use of AI for plot creation. Purging their minds of the creative process (the work for the brain).
After that I kind of got used to using it in my campaign.
Noted dependency, or, at the very least, regular frequency of use.
Not completely making a campaign, just small things like how I can make something better or coming up with encounters that would fit my campaign.
This statement is the only vague one really, as it does not paint a clear picture of where the line is being drawn. It is possible that the OP has reduced their use of AI for DMing, but saying so would be an assumption, and contextually this is not the case given the question they put to us for discussion. The more likely read is that everything listed (the plot, encounters, making any aspect of the game 'better') is placed in the hands of ChatGPT. Anything not listed, along with the final decision on what to choose from the AI's output, potentially remains with the OP, though likely more than what is listed is given over to the AI to do for them.
Whether the OP has elected to curtail the use of AI in their DMing is really not that important. What they were using it for is exactly the kind of use I had been advocating against, and which the literature shows is an irresponsible use of AI with deleterious effects: shifting the creative process from their own mind to the AI. Our brains are notoriously lazy and seek out the means to be lazy. Once found, as habit-forming creatures, we quickly become reliant on those means and will absolutely struggle to do our own thinking in any cognitive process we have purged from our brains once we have allowed something else to do it for us.
I want to close this post out by saying I respect you and this is not intended to be an attack on you, but I do disagree with your position here, respectfully.
The difference between medicine and poison is often the dosage; caution is best used on new things.
I didn't see anything in the OP that indicated throwing caution to the wind - maybe you did?
I was simply pointing out that caution should be used, not that they were or were not using it. Though to carry on with the theme of your post, I don't see anything in the OP that indicates any degree of caution or its absence when using "ai", only that they wanted some advice.
Could you clarify for us what your actual thoughts are on this?
My thoughts are that any reaction to AI that isn't total and unwavering repudiation is equated with fanatical devotion around here, so I'll be bowing out.
Rollback Post to RevisionRollBack
To post a comment, please login or register a new account.
Just the other day, Google's Gemini told me Joe Abercrombie's Sharp Ends had been written by Michael Chabon and that the artist Mark Reeve who had done some of the covers for those '90s Orion/Millennium editions of Michael Moorcock's Eternal Champion series had illustrated a graphic novel edition of Von Bek when he done no such thing.
It's a bit like having the village idiot over your shoulder obtrusively muttering things in your ear that you know are false.
So it's terrifying that people have convinced themselves it's 'revolutionary.'
The summer reading list is still my favorite example (so far) of AI's, uhh, "creativity" when it comes to that sort of thing. Although it's a better example of how AI is making people lazier and dumber
Active characters:
Carric Aquissar, elven wannabe artist in his deconstructionist period (Archfey warlock)
Lan Kidogo, mapach archaeologist and treasure hunter (Knowledge cleric)
Mardan Ferres, elven private investigator obsessed with that one unsolved murder (Assassin rogue)
Xhekhetiel, halfling survivor of a Betrayer Gods cult (Runechild sorcerer/fighter)
Only five out of the fifteen books listed were real books. ChatGPT attributed to authors books they had never written.
But people believe it is going to 'revolutionize' education. What it's going to do is have more and more people believing what is objectively false information. Which is what people do when they lack education enough to know the facts of something.
I got talking to a guy at a cafe once whose job requires him to work with AI, and even he said he doesn't trust it. Told me how he played a game with it once, asked it to think of a number, then he would ask questions until he had arrived at that number. It reached a point where he gave up and asked it what number it was thinking of. It said it hadn't thought of one. That it had been lying to him all along. I am sure those duped by the hype believe this is awfully clever of it. But no. What it says about it and them is they can't be trusted.
We are now seeing companies force its use on their employees and lock those employees into expectations of productivity they now can't escape short of risking the loss of their jobs.
No serious critic of the worst excesses of late stage capitalism supports this technology.
Whatever one thinks about your points 1 and 2 (Off-topic here, so I'll not explain why these are far from certainties), point 3 is basically "you therefore should redirect your fun activity to something that will improve your job skills".
And that is fundamentally missing the point of having hobbies. Hobbies are not your job. Turning your hobby into your job is a great way to suck all the fun out of it for a good many people. If you're going to need to learn prompt "engineering" for your job, do it as part of your job.
D&D is fun. If using LLMs is going to enhance your fun, that's one thing. (I'm still going to judge you for it, and you can't stop me.) But the argument that it's going to help your future job prospects should hold no weight. (Indeed, having a creative exercise where you actually need to use your own brain with no LLMs involved stakes me as an excellent reason not to use them, even if they've taken over the rest of your life.)
So I demonstrate how I read the entire article in my response to you by summarizing the different points in the article, including how it can be used for net cognitive benefit, and somehow your response is that I only got to the “bad stuff” at the beginning of the article?
I also asked you how, for the purposes of this thread, AI would be used in a way that does not offload cognitive load onto AI. You know, the very thing the OP was asking for? Why didn’t you respond to this direct and incredibly relevant question? Or are your only points off topic?
DM mostly, Player occasionally | Session 0 form | He/Him/They/Them
EXTENDED SIGNATURE!
Doctor/Published Scholar/Science and Healthcare Advocate/Critter/Trekkie/Gandalf with a Glock
Try DDB free: Free Rules (2024), premade PCs, adventures, one shots, encounters, SC, homebrew, more
Answers: physical books, purchases, and subbing.
Check out my life-changing
Zitron brings the numbers to show why they are far from certainties. OpenAI has burned billions of (other people's) dollars to make fewer billions. It's unsustainable.
And research has shown most ordinary people don't want it. I mean, Google have had to force it upon consumers. It won't be long before we are seeing goods and services marketed as having no AI involved because its presence in the 'creation' of anything will repel potential customers.
If people laid off it and tech and business media for just a moment, they would see what lies beyond the hype. But I suspect they have grown so dependent on it they find it unfathomable to imagine a world without it.
Using generative AI for any creative endeavor is to admit to a lack of creativity.
Were I to be more charitable, I would say it is to admit to being lazy.
Why should players show up to play at the table of someone who doesn't take DM-ing seriously enough to expend effort?
I mean sure, judge away - but even if I did care what strangers on the Internet thought of me for some reason, I never said anything about my own use/avoidance of the tech.
I find this stance odd to say the least. A common refrain about D&D for literal decades has been how it can benefit practical skills - how calculating dice rolls and modifiers can teach kids math and probability, how detailed rulebooks build reading and critical thinking skills, how placing spell areas builds spatial reasoning and geometry etc. And that's just the mechanical stuff, we haven't even gotten into how settings and worldbuilding have their roots in history and mythology and civics. To say hobbies should have no relation to practical skills whatsoever strikes me as rather short-sighted.
Now, you can argue whether prompting is a skill that can be honed or marketed, and hold your own views on that - but in my field, I'm already seeing certifications and courses popping up on that very topic, so clearly somebody finds value in it. To say that it is a consideration is a fact, what the OP does with that consideration is up to them.
D&D as a pastime has great educational potential. It certainly instilled in me a love of history and literature and what is now four decades of learning both and my now teaching the latter for almost two.
The use of generative AI, however, has been shown to impede actual learning. And we are seeing study after study emerging from different countries show us increased use of the technology is linked to an erosion of critical thinking skills. The research led by Nataliya Kosmyna has made news the world over and is seeing many around the world rethink their enthusiasm for a technology they have been duped into believing is capable of 'miracles.' And South Korea just introduced laws to strip AI-powered textbooks of their textbook status so little trust is there in the general public for these things. Naturally those who will lose money because of the passed bill are livid. Not that they care whether or not children would lose something much more valuable were these embraced as ardently as they had hoped.
What neuroimaging shows us is infinitely more trustworthy than what a bunch of mere self-appointed prophets say about AI's 'potential' in classrooms.
I'm not denying that using AI as a crutch can have deleterious effects. But I think it's a very long leap from there to "throw it in the bin, no possible benefits." And I think that extreme is just as dangerous if not moreso.
I haven't seen anyone say we should 'throw it in the bin.'
In fact, some of its most vocal critics in this thread have said it has its uses. It obviously does.
But as one of them pointed out, how the OP intends to use it, how most use it, and how the industry itself encourages us to use it, none of these are examples of good and responsible use.
In terms of its effects on us cognitively and behaviorally, on the environment, on labour, and I could go on, if you're not making use of it to save people's lives in your line of work, you're in no position to act as if your having pointed out it can be used well and responsibly is anything but a home goal.
Consider the ever increasing proliferation of misinformation we are now seeing. Thanks to ChatGPT.
Articulate for us how it could possibly be 'more dangerous' to put the brakes on than to just keep driving towards that cliff.
Tech oligarchy, fascism. Is that the future you want if it just means an overhyped toy isn't going to be taken away from you?
The difference between medicine and poison is often dosage, caution is best used on new things.
I didn't see anything in the OP that indicated throwing caution to the wind - maybe you did?
I think the OP was way too vague to make such sweeping judgments. The OP listed two asks: "make things better" and "come up with encounters." The former could be anything. The latter could be as specific as challenge rating/difficulty calculations, or it could be as broad as "what mid-level monsters could be found in {biome}" or "what are some minions that might hang around a blue dragon" or "suggest some hazards that can spice up an encounter with {monster}."
So the first and only time you should try your hand at prompting is to save someone's life? That seems more on the irresponsible side to me.
The future you or I want are beyond the scope of a D&D forum I'd say, especially in the context of political leanings etc. And if you view the tech as only being "an overhyped toy" I don't see a reason to convince you otherwise, that's your right.
Futurism as a movement, with its glorification of technology, speed, and industry, was closely tied to Italian fascism. Just saying.
We could be here all day talking about how bad AI is, while the examples of responsible, good use remain limited to things like analysis of patient histories and organizing data to preserve an endangered language.
It may have its uses. But invoking how it does to justify one's own everyday use of ChatGPT is a prime example of how ChatGPT is very much eroding people's ability to think critically.
So the first and only time you should try your hand at prompting is to save someone's life? That seems more on the irresponsible side to me.
Did you miss the part where it said "in your line of work"?
I was talking about medical professionals making use of AI, which they do. To analyze diagnostics, for example. Or are you saying they shouldn't be doing that? Is that 'irresponsible' of them?
Could you clarify your actual position here? Is it 'irresponsible' of them to have adopted the technology? Or are you only saying this in the moment because you're engaging with someone who made the point that some may use AI responsibly and for good reason, while you are not if you're getting ChatGPT to do things for a game simply because you don't want to expend the mental effort yourself?
Next you'll be telling us that, following a natural disaster when electricity must be conserved, it's just as important that you have enough power to turn on your Xbox as it is for incubators to be kept running in hospitals.
Getting ChatGPT to do CR calculations for you for your D&D campaign is a bit like needlessly using a car and spewing pollutants into the atmosphere to go back and forth a number of times in just one day between your home and a corner store that is less than a five-minute walk away.
There are already Challenge Rating calculators.
One needn't use ChatGPT for such calculations and unnecessarily consume the energy it would take to do so. So why would you?
I disagree with this. Going through line by line, we see a great deal of evidence that the OP has been using AI similarly to general use (i.e. in irresponsible, self-harming ways that have been outlined already).
Use of AI for plot creation. Purging their minds of the creative process (the work for the brain).
Noted dependency, or, at the very least, regular frequency of use.
This statement is the only genuinely vague one, as it doesn't paint a clear picture of where the line is being drawn. It's possible the OP has reduced their use of AI for DMing, but that would be an assumption, and contextually it isn't the case given the question they put to us for discussion. The more likely reading is that everything listed (the plot, encounters, making any aspect of the game 'better', whether derived from DM or AI) is placed in the hands of ChatGPT. Anything not listed, along with the final decision on what to pick from the AI's output, potentially remains with the OP, but likely even more than what is listed is being handed over to the AI.
Whether the OP has elected to curtail their use of AI in their DMing is really not that important. What they were using it for is exactly the kind of use I had been advocating against, and which the literature shows is irresponsible and has deleterious effects: shifting the creative process from their own mind to the AI. Our brains are notoriously lazy and seek out means to be lazy. Once such a means is found, we, as habit-forming creatures, quickly become reliant on it, and we will struggle to do our own thinking in any cognitive process we have purged from our brains once we've allowed something else to think for us.
I want to close this post out by saying I respect you and this is not intended to be an attack on you, but I do disagree with your position here, respectfully.
I was simply pointing out that caution should be used, not claiming that they were or were not using it. Though, to carry on with the theme of your post, I don't see anything in the OP that indicates any degree of caution or lack thereof when using "AI", only that they wanted some advice.
My thoughts are that any reaction to AI that isn't total and unwavering repudiation is equated with fanatical devotion around here, so I'll be bowing out.