Sure, but generative AI is what all the buzz is about. In the gaming space, better quality AIs than what video games actually use have been available for decades (the earliest application of AI to RPGs that I know of was using Eurisko to beat Trillion Credit Squadron, back in 1981), and people don't even bother.
Folks. Listen. Executives at cigarette companies don't smoke cigarettes. Executives for McDonald's don't eat at McDonald's. And executives at Hasbro don't play D&D. The only reason Hasbro bought D&D is to make money. Period. They saw an intellectual property that their analysts considered to be "under-capitalized", and they fully intend to capitalize on it. If that means using A.I., then that's what they'll do. If that means repackaging existing material (with a few little tweaks here and there) so they can justify charging you $60 a pop for a whole new line of books, then that's what they'll do. If that means micro-transactions, then that's what they'll do. If that means laying off the experienced creatives who built this game up from an obscure niche hobby to a global powerhouse, then that's what they'll do. All that matters to the C-Suite is the bottom line. It's not about "good vs evil", it's not about "characters vs monsters", it's only about "profit vs loss". A.I. is a genie that cannot be put back into the bottle. I don't like it, you may not like it, so all we can do is focus on our game at our table and try to preserve the things that made D&D great to begin with.
Thank you for attending my Ted Talk. Parking will not be validated.
Sustainability is irrelevant. Sustainability only matters if you're planning long term. Hasbro, like nearly every corporation in America today, has completely and utterly given up on long term planning. Every company today is running on a hedge fund mentality. All that matters is today's profit. They only care that this quarter's profit is higher than last quarter's profit. They only care that this year's sales are higher than last year's sales. Next quarter, or next year, won't matter to them until next quarter or next year. And if the market takes a dip before then, they will happily burn it all down, cash out, and invest that money in the next thing coming down the pike.
AI is a new toy. Executives are like children. They all want to play with the newest toy.
Sustainability is irrelevant. Sustainability only matters if you're planning long term. Hasbro, like nearly every corporation in America today, has completely and utterly given up on long term planning. Every company today is running on a hedge fund mentality. All that matters is today's profit. They only care that this quarter's profit is higher than last quarter's profit. They only care that this year's sales are higher than last year's sales. Next quarter, or next year, won't matter to them until next quarter or next year. And if the market takes a dip before then, they will happily burn it all down, cash out, and invest that money in the next thing coming down the pike.
AI is a new toy. Executives are like children. They all want to play with the newest toy.
This is true.
But my point about sustainability was about OpenAI.
'Today's profit' for them looks bad.
They lose money with every prompt.
Folks. Listen. Executives at cigarette companies don't smoke cigarettes. Executives for McDonald's don't eat at McDonald's. And executives at Hasbro don't play D&D. The only reason Hasbro bought D&D is to make money. Period. They saw an intellectual property that their analysts considered to be "under-capitalized", and they fully intend to capitalize on it. If that means using A.I., then that's what they'll do. If that means repackaging existing material (with a few little tweaks here and there) so they can justify charging you $60 a pop for a whole new line of books, then that's what they'll do. If that means micro-transactions, then that's what they'll do. If that means laying off the experienced creatives who built this game up from an obscure niche hobby to a global powerhouse, then that's what they'll do. All that matters to the C-Suite is the bottom line. It's not about "good vs evil", it's not about "characters vs monsters", it's only about "profit vs loss". A.I. is a genie that cannot be put back into the bottle. I don't like it, you may not like it, so all we can do is focus on our game at our table and try to preserve the things that made D&D great to begin with.
Thank you for attending my Ted Talk. Parking will not be validated.
I agree with much of what you have said there.
Except this: A.I. is a genie that cannot be put back into the bottle.
And why?
Because its development is not profitable.
OpenAI is pouring more money into the development of it than it is making from it. For every $2.35 they spend on it, they make $1.
If it's all about 'profit versus loss', then the development of AI is just not sustainable.
Companies want to use it if profits matter more to them than anything else.
But there will come a point when OpenAI realizes it's stupid to burn tens of billions of dollars.
And for what? So people with too much time on their hands can have it make memes for them?
These companies are predicting future revenue due to AI development.
These companies are predicting future revenue due to AI development.
Let's hope for their sake their crystal balls work better than does something like ChatGPT.
And in case you missed my answer to your question:
Because it takes a cult-like level of dissonance to one minute be talking about oh-how-so-much you 'care' about the environment and then the next to be cheering and applauding the use and development of AI. Talking endlessly about how it might provide solutions all the while turning a blind eye to what it is doing.
Talking about how you 'care' about working people knowing very well it is predicted to displace upwards of 300 million workers globally.
That is how members of cults 'think.'
Their actions are at direct odds with their claims.
At the educational institution for which I work, the only people enthusiastic about AI are those who almost exclusively read business lit and industry news. The rest of us who read widely remain highly skeptical, and not just because of what we read. We have already been witness to utter failures in its use for what we do. But the enthusiasm remains, and those in charge sit like emotionless robots and ignore any concerns about those failings.
These companies are predicting future revenue due to AI development.
Let's hope for their sake their crystal balls work better than does something like ChatGPT.
And in case you missed my answer to your question:
Because it takes a cult-like level of dissonance to one minute be talking about oh-how-so-much you 'care' about the environment and then the next to be cheering and applauding the use and development of AI. Talking endlessly about how it might provide solutions all the while turning a blind eye to what it is doing.
Talking about how you 'care' about working people knowing very well it is predicted to displace upwards of 300 million workers globally.
That is how members of cults 'think.'
Their actions are at direct odds with their claims.
At the educational institution for which I work, the only people enthusiastic about AI are those who almost exclusively read business lit and industry news. The rest of us who read widely remain highly skeptical, and not just because of what we read. We have already been witness to utter failures in its use for what we do. But the enthusiasm remains, and those in charge sit like emotionless robots and ignore any concerns about those failings.
That is the behavior of a cult.
The easy & pervasive activation of hallucinations in genAI is proof that the tech is too flawed to be trustworthy.
It'll get better? Remember NFTs, the Metaverse, & most crypto not connected to people in power?
Those NEVER got better, despite the assurances by their backers.
& quoting Elon Musk is...questionable, to say the least.
But something I DO agree with...Chris Cocks doesn't play D&D, & it shows.
DM, player & homebrewer (current homebrew project is an unofficial conversion of SBURB/SGRUB from Homestuck into D&D 5e)
Once made Maxwell's Silver Hammer come down upon Strahd's head to make sure he was dead.
Always study & sharpen philosophical razors. They save a lot of trouble.
I cannot speak for the mentalities of other people I know nothing about.
I never said it was a peer-reviewed scientific journal.
For someone insisting others provide articles, where are all the articles that support those "mights" of yours?
I'll give this thread a chance to get back on topic as we do understand this is a concerning issue for some of our players, but if it derails again into being nonconstructive and off-topic arguments, we will have to lock it. Please also keep in mind our rules:
6.1 • Company Criticism
Public criticism of Wizards of the Coast (Or Hasbro in this case) is not itself against the rules; however, users are still responsible for ensuring their conduct does not violate any other site rules.
Possible violations include, but are not limited to:
Threats of physical violence.
Doxxing (publicly sharing another person's private information).
Name-calling, insults, and other forms of harassment.
Swearing and other language not suitable for an all-ages audience.
And that the topic must stay related to D&D. We understand that it's natural to want to get into a discussion focusing mainly on AI itself, but please make sure you are relating that back to D&D. You can always suggest resources for further reading if it's going to go off topic. Likewise, try to make sure you're not going too deep into speculation without linking back to current evidence. We understand being concerned and sharing those concerns, but if a topic looks like it's only serving to generate toxicity and panic without trying to ground it in reality or have a goal in mind for discussion, the thread will have to be locked.
And if you find yourself constantly replying to one specific other user, consider if this is still about the topic or more about debating them and if you need to step back and disengage. If someone has broken the above rules, engaging with such is only going to escalate it, not solve it. We will not tolerate users harassing and disrespecting each other over this.
The most important takeaway from Cocks’ most recent set of comments: He is not saying anything new. At its core, he is reiterating the same basic points in all his prior commentary on AI’s future in D&D. That consistency is a good thing - it means their goal remains focused, tailored, and is not spiraling out of control as corporate AI decisions often do.
Most importantly, he still is not discussing using generative AI in the production of official D&D materials. As with every other interview, his vision for AI in D&D remains optional player and DM-facing generative tools. This means those who do not wish to interface with AI will not be required to do so - you can continue to support the AI-free base game without opting into AI generated elements.
Now, let us look at the products he is proposing. Nothing he proposes is novel - people already use AI to generate campaign art, assist with preparation, or even develop larger aspects of the campaign. What does that mean? There are lots of problems with AI - ethical, legal, environmental, etc. - but those problems exist regardless of who performs the generation. For environmental damage, the damage is going to be done regardless - folks are not going to stop using AI to assist with their games, so Wizards releasing their own product will likely result in little to no additional damage beyond what was already likely to occur. The legal and ethical concerns are also lesser with a Wizards-owned product - at least they would be training their models on their own copyrighted materials, as opposed to a third party flagrantly training an AI on Wizards’ property. A Wizards-produced generative AI might not be perfect - but there are a lot of ways it is better than the current situation.
Turning to the actual content Wizards is working on, there is no question generative AI can be a useful tool for DMs. It is no secret D&D suffers from a DMing shortage - five decades of effort have resulted in little to no change in the fundamental fact that only about 20% of players DM, and not all of them are DMing at all times. A large portion of this is a perception problem - DMing takes more effort than being a player. Moreover, being a DM requires a lot of different skills and tasks - encounter design, storytelling, worldbuilding, dungeon design, etc. - not all of which everyone likes. I, for example, know some folks who enjoy the idea of building a world and telling a story in it… but they do not enjoy the mechanical elements required to make that work. Finally, there is the aspect of time management - even a DM who has all the necessary skills might prefer to spend their time designing something like an epic dungeon… instead of a half dozen throwaway NPCs the players might only exchange a few sentences with.
Those are all things generative AI can help with - reducing the barrier to entry for potential DMs by lightening the load and making up for their perceived shortcomings, as well as giving experienced DMs a tool to better focus their particular skills. Those are longstanding problems that decades of work could not fix - which is exactly the kind of problem where it makes sense to apply generative AI. As Cocks talks about it, their goal is less to create a full experience that sidesteps the human element - their goal is to create a set of tools to augment and support the human element.
All of that is greatly improved if the genAI is trained by Wizards and for Wizards’ game - it helps keep the AI focused and its results helpful and tailored, resulting in a better, more useful product.
Now, I am sure there are folks who are upset because some of the human element is being replaced - but the reality? We have been replacing that human element for five decades now - replacing it with “I just will not DM.” Given the DM shortage, I am hard pressed to say “human DM with AI tools” is a worse situation than no DM at all, or people who want to DM being unable to do so because they lack time or a mind for some aspects of DMing.
Overall, am I happy we live in a world where generative AI holds all kinds of problems? No. Would I use these tools? Perhaps for art - my drawing skills leave much to be desired - but not for other content. Am I going to begrudge Wizards for reading reality and realizing “someone will do this, it might as well be us”? Of course not, particularly since it adds no additional harm and they remain committed to it being an optional player-facing tool and not an element of the game’s core design.
Livermore labs recently managed to output more energy through nuclear fusion than was put into the system.
That has nothing to do with AI*, and "maybe fusion is actually going to work this time" doesn't solve the energy problems, since even if it works, it's years away from production, and decades from ubiquity.
* I suppose it's possible that machine learning was involved in some way, but that's not what people are talking about when they say "AI" these days. It's also an actually useful use of the tech, unlike generative systems.
It may not be what _you_ are talking about when you talk about AI, but it is clearly what other people mean.
In a conversation about generative AI, references to "AI" may be assumed to be about generative systems.
Furthermore, generative systems are what's being discussed in the vast majority of the public conversation on the topic.
In either event, attributing successes of machine learning to "AI" in a conversation about generative systems is misdirection. Most machine learning systems are specialized to task, and do things that the LLM chatbots everybody's talking about can't do reliably, if at all. ChatGPT cannot design or operate a fusion reactor, and never will be able to, and if anybody's gonna try, I'd like some advance warning. :)
By the way, I guess I missed it, but when did you get your graduate degree in AI or any closely related field? My graduate work was in Systems Architecture and Engineering at Viterbi. It’d be nice to confirm that you actually have relevant graduate education in the field before you do anything as brash as telling people they are using the wrong language.
"Systems Architecture and Engineering" doesn't necessarily have anything to do with AI or machine learning. (And, since you didn't say you did research on the subject, I must assume you didn't.)
Anyway, credentialism is irrelevant to a conversation about what people in general are talking about when they say "AI". Conflating the different types of systems is the enemy of clarity.
Given the tendency of people who are actively working in the field to make wildly unsupportable claims about the tech, I think credentials might even be counterproductive to useful conversation. :)
I've stated this previously, and every time I use something claiming to generate content by request using "AI", it reinforces the notion that AI doesn't understand nuance. The responses still get worse the more specific my requests become.
It primarily looks for keywords, with very little regard for grammar. (I can get a D&D backstory by just typing "dragonborn backstory".) Grammar makes all the difference to humans. An AI might interpret the two halves of a saying I like as the same sentence: "Do not what you'd regret doing. Do what you'd regret not doing." (I don't know if anyone said it before I did, but I've been saying this for a couple of decades at least. Words to live by, I think.) Moving one word changes the nuance of the statements.
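To make the point concrete: the two halves of that saying are built from exactly the same words, so any purely keyword-based match cannot distinguish them, even though their meanings are opposite. A quick sketch in Python:

```python
# The two halves of the saying contain exactly the same words;
# only the position of "not" differs. A bag-of-words / keyword
# comparison therefore sees them as identical, while an exact
# comparison (which respects word order) does not.
a = "do not what you'd regret doing"
b = "do what you'd regret not doing"

print(sorted(a.split()) == sorted(b.split()))  # True  -> same keywords
print(a == b)                                  # False -> different sentences
```

Word order, like grammar, is exactly the kind of signal a keyword match throws away.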
I remember when ChatGPT couldn't count the number of Rs in Strawberry and in Raspberry, or the Bs in Bubble, while correctly counting the Ns in antennae just a year ago - yet all the while, it would spell the words correctly. I narrowed it down to what seemed like parsing errors. I managed to get it to tell me how many Rs are between the E and the Y, and it told me there was one R, even though it stated that the E is in the 7th position, the Y is in the 10th position, and R was the only letter between them.
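For what it's worth, the counts themselves are trivial to verify with ordinary string operations, which is what makes the chatbot's misses notable. A quick Python check:

```python
# Ground truth for the letter-counting examples above.
print("strawberry".count("r"))  # 3
print("raspberry".count("r"))   # 3
print("bubble".count("b"))      # 3
print("antennae".count("n"))    # 3

# And the "Rs between the E and the Y" claim: in "strawberry",
# the letters strictly between 'e' and 'y' are "rr" -- two Rs, not one.
w = "strawberry"
print(w[w.index("e") + 1 : w.index("y")])  # 'rr'
```

Any system that operated on the letters themselves would get this right every time; the failure is consistent with models operating on tokens rather than characters.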
Reliance on AI without close monitoring is going to return foundational errors, even for something as simple as asking AI to verify something that's obvious to people.
Shackling the AI to prevent those errors is a crude, brutal hack and not an actual solution to the issues, but that's what has been happening more and more lately. It's easier to just tell the application to not do something than to try to figure out why it did it in the first place and see how to organically train it out of that response. We've come across more than a few instances when shackling AI changed the problem from one thing to another equal but opposite problem.
For the services that tap into Internet searches, the common generative AI doesn't know the difference between a fact, a rumor, an opinion, and a lie.
AI is a tool still in its infancy with coarse safety measures in place that put blinders on it. Be careful in its use. Keep both hands on the AI wheel at all times.
Swing a hammer without looking and someone's finger is going to get smashed.
(I will state this. Microsoft's AI actively avoids content that is not specifically licensed to Microsoft for its use or is not verified as free for public use. Its art generation is limited because of it, but at least it's not digging up people's stuff without their permission. I found this surprising after their irresponsible and disastrous release of the Tay chatbot on Twitter.)
Human. Male. Possibly. Don't be a divider.
My characters' backgrounds are written like instruction manuals rather than stories. My opinion and preferences don't mean you're wrong.
I am 99.7603% convinced that the digital dice are messing with me. I roll high when nobody's looking and low when anyone else can see.🎲
“It's a bit early to be thinking about an epitaph. No?” will be my epitaph.