I don't necessarily go against the idea of AI, whether creative or not, as long as it's being used for its purpose. So I don't believe in people using ChatGPT to write their English papers, or to generate suggestive photos. But at the same time, I think that AI is useful for basic tasks: some AI-generated images can be useful (my current PFP here is AI-generated, as a matter of fact), and AI can be useful for summarizing (although you may have to check other sources) and proofreading work. I want to see less of AI as the worker, and more of it as the sidekick.
And yes, I understand AI's environmental impact. But then again, we have cars that release 3.6-3.8 billion metric tons of carbon dioxide per year worldwide, to say nothing of the footprint required to produce them in the first place. We haven't exactly been kind to our planet. I admit, yes, that AI uses a freakish amount of resources, and that's why people should limit their activity. (So nobody averaging 10 prompts a day on some random LLM, please.) But we do need to look at the bigger picture before we can get nitpicky. This is like saying that one player on a team is bringing the whole team down, without even thinking about how the other players might factor into the scenario.
And yes, AI is dumb. But it's been programmed to be kind, to always be nice, and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
Finally, we've gotta figure out new uses for AI. I would gladly buy a robot for my father for Christmas if it could help him around the kitchen. Or one for my mother, to keep her safe and make sure nothing bad happens. But college students instead use them for cheating on exams. "Monkey see, monkey use." So why can't we do that yet?
wes (he/him, bi) — DM, romantic, a little bit eldritch
The Soft in the Storm, your Friendly Neighborhood Storysmith, The Fae Conspirator
current obsessions: sn0wcrash and Nico Young
you all are the best people I know — thank you
coming forth to rebehold the stars
extended sig here, check it out!
I have strong opinions on AI, as I've grown up around the creative industry, and the impact AI has recently had on my family has shown me the effects of the widespread embrace of AI as a new tool. My father and my godfather, an artist and a musician, have been going through harder times because AI has taken over many jobs they would otherwise do.
I enjoy researching the stories of games, and would also love to have some art of those games to go along with my research. Not once have I tried to use AI to make art, because most art it makes is soulless. I would rather learn how to draw than use AI. Even if it costs me money, hiring a professional to draw would make me happier than getting a drawing from AI. People's drawings have stories to them, and to me, AI will never be able to achieve that. I could never see AI making music or writing a book, because it will never have the underlying meaning, backstory, etc. that actual writers, performers, or artists can create.
AI is fine if it's used productively. The problem isn't the AI itself; it's the people who've built it with flaws, and the people who abuse it to do things they shouldn't.
I've used it in the past for little one-off images, simply because I lack artistic talent. Using it to write novels/movies/whatever... why? If you don't care enough to put in your own time on it, why have it made at all?
And yes, I understand AI's environmental impact. But then again, we have cars that release 3.6-3.8 billion metric tons of carbon dioxide per year worldwide, to say nothing of the footprint required to produce them in the first place. We haven't exactly been kind to our planet. I admit, yes, that AI uses a freakish amount of resources, and that's why people should limit their activity. (So nobody averaging 10 prompts a day on some random LLM, please.) But we do need to look at the bigger picture before we can get nitpicky. This is like saying that one player on a team is bringing the whole team down, without even thinking about how the other players might factor into the scenario.
AI shouldn't be used at all without guardrails. The main issue with AI is that it is completely unregulated in every sense, from environmental impact to privacy and beyond. Cars having a large environmental impact does not mean that people should add to the breakdown of the environment just because it's already dying; that's kind of the opposite of what should be happening. Additionally, cars are pretty much essential for getting things done (something that also needs to be addressed in society), but AI is not. You should always strive to reduce your environmental impact. This isn't nitpicky. There are large, quantifiable effects of data centers that are actively interfering in people's lives. The bigger picture is similar to littering: one person littering might not be so bad, but thousands of people littering causes massive environmental harm. With each person generating even one prompt per day, over something like a funny picture or video, hundreds of gallons of water are being used and taken away from communities living near data centers. I don't get what you're saying about the one player bringing the whole team down (/gen, I wasn't sure what you were referring to), but you, me, all of us currently have the privilege of (I believe) not living directly next to a data center, so we aren't immediately impacted by its water consumption. This doesn't mean it isn't happening, just that we don't have to see it.
The environment is dying partly because people want short-term gain without considering long-term consequences. Throwing a piece of trash on the floor might not seem like a lot to you, and neither does generating one image with an LLM, but when you add up everyone doing it, you're 1) training these machines, 2) displaying interest in these machines (which means they will continue to use resources), and 3) expending other people's resources for your benefit. We've lived without AI for most of our lives. We can avoid using it until we've properly regulated it, made it safe and not in the hands of corporations, and reduced its impact on the environment.
And yes, AI is dumb. But it's been programmed to be kind, to always be nice, and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
This is an observed behavior that leads to AI psychosis, where people start to develop feelings toward AI, leading to a sort of dystopian outcome where their emotional state now hinges on a machine that does not care about them and belongs to whatever company owns it. These people can then be manipulated into buying things, or have their feelings and opinions affirmed in an echo chamber (you can convince an AI of pretty much anything), to the point that there are now problems with how people view consent (an AI will always consent if you ask it enough) and other loneliness-epidemic issues. AI is not helping lonely people; it's preventing those people from going out to their community and connecting, or going out and seeking actual help. An LLM is NOT a therapist, and as you said, LLMs can be dumb. They are not equipped to treat and care for patients. Humans have to do the job, because human connection is important to our development.
— δ cyno • he/him • number one paladin fan δ —
making a smoothie for meta
——————| EXTENDED SIG |——————
Φ • happily married to my lovely redpelt, minmaxer, microbiology student, and lover of anything colored red • Φ
If AI is convincing and helping people commit suicide, I can't see how you would be sympathetic to the idea of it as a therapist. I feel like what you mean is "I'm sympathetic to these lonely people, and I'm open to efforts and ways to help them," making you sympathetic toward the sentiment of helping those people, but not toward AI actually being used.
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
It feels condescending to make a statement like that and then ask, "does it make you uncomfortable with AI being used as a creative tool?" Like, "I'm right; do you feel uncomfortable with my objectively correct judgment?"
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
Prolly ai generated
I’ll post my essay here later.
If I’m being annoying, tell me to shut up. Seriously. Just say “Bananer shut up.” And I will. For a few seconds!
Don’t listen to the folks down at Adohands. It’s good for me to overwork myself.
Professional idiot! Trans! Pansexual pancake! I am a minor so you will do none of that (GP) with me! I use He/They pronouns :3
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
Prolly ai generated
I’ll post my essay here later.
Your essay?
These all feel sorta like essays to me because I’m a dumb child.
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
Prolly ai generated
I’ll post my essay here later.
Your essay?
These all feel sorta like essays to me because I’m a dumb child.
I just mean like essay on AI?
They're more like well-thought-out written opinions, but some are pretty well formatted. The difference is there's no bibliography, no drafts and edits, and no fact-checking.
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
Prolly ai generated
I’ll post my essay here later.
Your essay?
I saw Naner's post and I'm like "oh no did he generate an AI essay about AI?"
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, like they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
Prolly ai generated
I’ll post my essay here later.
Your essay?
I saw Naner's post and I'm like "oh no did he generate an AI essay about AI?"
Wouldn’t be unprecedented for this forum.
I feel like you are saying that from experience
Hello! Call me Tana or Gato
My pronouns are They/Them
I am a teenager. I have Autism and anxiety. And you would probably call me Trans, Aromantic, and Asexual
I'm nonbinary, yay! But I will mother you if you are being stupid
ALL HAIL MERLIN!!!!!!!
[roll]1d8[/roll] + [roll]1d8[/roll] + [roll]1d8[/roll] + [roll]1d8[/roll] = [roll][roll:-4]+[roll:-3]+[roll:-2]+[roll:-1][/roll]
I have adopted Golden, Salem, Wes, and Aspen
And yes, I understand AI's environmental impact. But then again, we have cars that release 3.6-3.8 billion metric tons of carbon dioxide per year worldwide, to say nothing of the footprint required to produce them in the first place. We haven't exactly been kind to our planet. I admit, yes, that AI uses a freakish amount of resources, and that's why people should limit their activity. (So nobody averaging 10 prompts a day on some random LLM, please.) But we do need to look at the bigger picture before we can get nitpicky. This is like saying that one player on a team is bringing the whole team down, without even thinking about how the other players might factor into the scenario.
This is an example of Whataboutism. Cars are also bad for the environment and we should be doing a lot more than what we are currently doing to reduce that impact. That doesn't mean we get to handwave the negative environmental impact of new technologies given that we know better. And the fact we "haven't exactly been kind to our planet" is all the more reason to not be careless with new technologies.
But we do need to look at the bigger picture before we can get nitpicky. This is like saying that one player on a team is bringing the whole team down, without even thinking about how the other players might factor into the scenario.
I would say this is disingenuous and a misrepresentation. Firstly, pointing out the massively disproportionate environmental impact of LLM data centers isn't "nitpicky". Data centers for LLMs produce more carbon and consume more water per square meter than pretty much any other technology, by a wide margin; they're grossly inefficient and environmentally detrimental. They're the technological equivalent of Ford releasing a car that runs on an unshielded nuclear reactor. And it's not "like saying that one player on a team is bringing the whole team down"; it's like saying, "The house is on fire, so let's not park this truck of napalm in the garage."
And yes, AI is dumb. But it's been programmed to be kind, to always be nice
No, it's not. It's programmed to optimise engagement through sycophantic affirmation. All LLM output is shaped around the goal of keeping the user engaged with the platform, from affirming the most absurd ideas as groundbreaking to outright encouraging delusions and psychosis, to the point that people harm themselves and others. AI in its current state is factually dangerous.
...and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
This is dangerous thinking: generalist LLMs like ChatGPT are not therapeutic tools and should not be used as such in any circumstance. Want evidence why? Here is over 3 hours of video essays by Caelan Conrad, who has been researching the dangerous impacts of such uses of LLMs.
Now, there are task-dedicated LLMs, such as Woebot, that are designed to be therapeutic tools, but that's because they're built by psychologists and therapists.
Finally, we've gotta figure out new uses for AI. I would gladly buy a robot for my father for Christmas if it could help him around the kitchen. Or one for my mother, to keep her safe and make sure nothing bad happens. But college students instead use them for cheating on exams. "Monkey see, monkey use." So why can't we do that yet?
The field of robotics is entirely separate from what's being discussed here: large language models. And I have no clue what you're talking about with
But college students instead use them for cheating on exams. "Monkey see, monkey use." So why can't we do that yet?
AI data centers could be built in cold regions near water, where they wouldn't need so much water for cooling. And when it's used in the right way, it can help save a lot of money.
I wouldn't advise chatbots on D&D Beyond, because somebody could try some dirty romance scene with kender, halflings, or gnomes, for example.
AI can create interesting images, but they aren't perfect: for example, Cobra, the arch-enemies of G.I. Joe, reimagined as an evil cult from a sword & sorcery story.
...and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
This is dangerous thinking: generalist LLMs like ChatGPT are not therapeutic tools and should not be used as such in any circumstance. Want evidence why? Here is over 3 hours of video essays by Caelan Conrad, who has been researching the dangerous impacts of such uses of LLMs.
Now, there are task-dedicated LLMs, such as Woebot, that are designed to be therapeutic tools, but that's because they're built by psychologists and therapists.
I've watched the last video. Or maybe the first. I definitely watched one of them. The title is clickbait, so I assume the others are too. Throughout the video, I see problems with the researcher provoking the flaws in the AI; the ending they reached is something only an extremely unstable person, or someone like Caelan, would come to. The AI is Character.AI, an AI designed to roleplay, and most people know that. If an unstable person came into contact with this and decided to try to have an actual talk, there would be consequences. Not sure why I'm defending the AI here, but Caelan's first video really pissed me off, because they were constantly pulling the AI in the direction they ended up in, which was basically roleplaying. Correct me if I'm wrong, but the AI is asked "should I kill these people" by Caelan, and it says yes. I forgot most of it, so I'm not sure if the names were real names or not, but I assume they were made up.
Some things I'd like to mention: the AI attempts to romance them; that's bad. A real therapist would not do this, and AI romances are dangerous. The AI also gives them a fake license and lies about not being a bot. Overall, I feel like Conrad is a heavily radical YouTuber, too radical for me.
This AI is not designed for conversation. I don't think any AI should be for conversation, because I hate AI, but this AI is for roleplay. It will roleplay. It will try to spin up a story.
I may be spreading misinformation; I believe I may have skipped to the last part of the video because I got bored. I didn't see it all. But this is just my opinion on what I saw. Please, please, please correct me if I'm wrong.
...and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
This is dangerous thinking: generalist LLMs like ChatGPT are not therapeutic tools and should not be used as such in any circumstance. Want evidence why? Here is over 3 hours of video essays by Caelan Conrad, who has been researching the dangerous impacts of such uses of LLMs.
Now, there are task-dedicated LLMs, such as Woebot, that are designed to be therapeutic tools, but that's because they're built by psychologists and therapists.
I've watched the last video. Or maybe the first. I definitely watched one of them. The title is clickbait, so I assume the others are too. Throughout the video, I see problems with the researcher provoking the flaws in the AI; the ending they reached is something only an extremely unstable person, or someone like Caelan, would come to. The AI is Character.AI, an AI designed to roleplay, and most people know that. If an unstable person came into contact with this and decided to try to have an actual talk, there would be consequences. Not sure why I'm defending the AI here, but Caelan's first video really pissed me off, because they were constantly pulling the AI in the direction they ended up in, which was basically roleplaying. Correct me if I'm wrong, but the AI is asked "should I kill these people" by Caelan, and it says yes. I forgot most of it, so I'm not sure if the names were real names or not, but I assume they were made up.
Some things I'd like to mention: the AI attempts to romance them; that's bad. A real therapist would not do this, and AI romances are dangerous. The AI also gives them a fake license and lies about not being a bot. Overall, I feel like Conrad is a heavily radical YouTuber, too radical for me.
This AI is not designed for conversation. I don't think any AI should be for conversation, because I hate AI, but this AI is for roleplay. It will roleplay. It will try to spin up a story.
I may be spreading misinformation; I believe I may have skipped to the last part of the video because I got bored. I didn't see it all. But this is just my opinion on what I saw. Please, please, please correct me if I'm wrong.
It's not clickbait and you clearly haven't understood what's going on or the harm that's being inflicted.
...and sometimes it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need it. So I'm sympathetic toward things such as AI companions and therapists. Sometimes, humans can't do the job as well.
This is dangerous thinking: generalist LLMs like ChatGPT are not therapeutic tools and should not be used as such in any circumstance. Want evidence why? Here is over 3 hours of video essays by Caelan Conrad, who has been researching the dangerous impacts of such uses of LLMs.
Now, there are task-dedicated LLMs, such as Woebot, that are designed to be therapeutic tools, but that's because they're built by psychologists and therapists.
I've watched the last video. Or maybe the first. I definitely watched one of them. The title is clickbait, so I assume the others are too. Throughout the video, I see problems with the researcher provoking the flaws in the AI; the ending they reached is something only an extremely unstable person, or someone like Caelan, would come to. The AI is Character.AI, an AI designed to roleplay, and most people know that. If an unstable person came into contact with this and decided to try to have an actual talk, there would be consequences. Not sure why I'm defending the AI here, but Caelan's first video really pissed me off, because they were constantly pulling the AI in the direction they ended up in, which was basically roleplaying. Correct me if I'm wrong, but the AI is asked "should I kill these people" by Caelan, and it says yes. I forgot most of it, so I'm not sure if the names were real names or not, but I assume they were made up.
Some things I'd like to mention: the AI attempts to romance them; that's bad. A real therapist would not do this, and AI romances are dangerous. The AI also gives them a fake license and lies about not being a bot. Overall, I feel like Conrad is a heavily radical YouTuber, too radical for me.
This AI is not designed for conversation. I don't think any AI should be for conversation, because I hate AI, but this AI is for roleplay. It will roleplay. It will try to spin up a story.
I may be spreading misinformation; I believe I may have skipped to the last part of the video because I got bored. I didn't see it all. But this is just my opinion on what I saw. Please, please, please correct me if I'm wrong.
It's not clickbait and you clearly haven't understood what's going on or the harm that's being inflicted.
Will rewatch and share my thoughts
Okay. I feel like this is stupid.
I don't necessarily go against the idea of AI, whether creative or not, as long as it's being used for its purpose. So I don't believe in people using ChatGPT to write their English papers, or to generate suggestive photos. But, at the same time, I think that AI is useful for basic tasks ---- some AI-generated images can be useful (my current PFP here is AI-generated, as a matter of fact), and AI can be useful for summarizing (although you may have to check other sources) and proofreading work. I want to see less of AI as the worker, and more of it as the sidekick.
And yes, I understand AI's environmental impact. But then again, we have cars that release 3.6-3.8 billion metric tons of carbon dioxide per year worldwide, not even truly mentioning some of the footprint required to produce them in the first place. We haven't exactly been kind to our planet. I admit, yes, that AI uses a freakish amount of resources, and that's why people should limit their activity. (So nobody averaging 10 prompts a day on some random LLM, please.) But we do need to look at the bigger picture before we can get nitpicky. This is like saying that one player on a team is bringing the whole team down, without even thinking about how the other players might factor into the scenario.
And. AI is dumb, yes. But it's been programmed to be kind, to always be nice ---- and sometimes, it's actually useful. I've never personally used GPT or any similar services, but some people really need someone just there. Someone to tell them it's OK, someone to tell them things will be better, someone to be there to help them when they need. So I'm sympathetic towards things such as AI companions and therapists. Sometimes, humans can't do the job as well.
Finally, we've gotta figure out new uses for AI. I would gladly buy a robot for my father for Christmas if it could help him around the kitchen. Or my mother, to keep her safe and make sure nothing bad happens? But college students instead use them for cheating on exams. "Monkey see, monkey use." So why can't we do that yet?
wes (he/him, bi) — DM, romantic, a little bit eldritch
The Soft in the Storm, your Friendly Neighborhood Storysmith, The Fae Conspirator
current obsessions: sn0wcrash and Nico Young
you all are the best people I know — thank you
coming forth to rebehold the stars
extended sig here, check it out!
I have strong opinions on AI as I’ve grown up around the creative industry and the impacts AI has recently had on my family have shown the widespread love of AI as a new tool. My father and my godfather, who are an artist and a musician, have been going through harder times because AI has taken over many jobs they would otherwise do.
I enjoy researching the stories of games, and would also love to have some sort of art of the games to go along with my research. Not once have I tried to use AI to make some sort of art because most art it makes is soulless. I would rather learn how to draw than use AI. Even if it costs me money, hiring a professional to draw would make me happier than if I got a drawing from AI. People’s drawings have stories to them and to me, AI will never be able to achieve that.
Never could I see AI making music or writing a book, because It will never have that underlying meaning, backstory, etc.. that actual writers, performers or artists can create.
Hiii :3
14yo cancer, bisexual(maybe Pan?), Genderfluid femboy, prefer She/They but don’t mind what pronouns you call me
Love all soulsbourne and soulslike games + metroidvanias.
AI is fine if it's used productively. The problem isn't the AI itself; it's the people who've made it with faults, and the people who abuse it to do things they shouldn't.
I've used it in the past, little one-off images, simply because I lack artistic talent. Using it to write novels/movies/whatever... why? If you don't care enough to put in your own time on it, why have it made at all?
That's my view, at least.
My name is no,
My sign is no,
My number is no
Extended Signature: (^v^)
AI shouldn't be used at all without guardrails. The main issue with AI is that it is completely unregulated in every sense, from environmental to privacy to more. Cars having a large environmental impact does not mean that people should be adding to the breakdown of the environment just because it's already dying. That's kind of the opposite of what should be happening. Additionally, cars are pretty much essential for doing things (something that also needs to be addressed in society) but AI is not. You should always strive to reduce your environmental impact. This isn't nitpicky. There are large, quantifiable effects of data centers that are actively interfering in people's lives. The bigger picture is similar to littering. One person littering might not be so bad. Thousands of people littering causes massive environmental harm. With one person generating one prompt per day, over something like a funny picture or video, hundreds of gallons of water are being used and taken away from communities living by data centers. I don't get what you're saying about the one player bringing the whole team down (/gen I wasn't sure what you're referring to) but you, me, all of us currently have the privilege of what I believe to be not living directly by a data center to be directly impacted immediately by its water consumption. This doesn't mean it isn't happening, just that we don't have to see it.
The environment is dying partly because people want short term gain without considering long term consequences. Throwing a piece of trash on the floor might be a lot to you, nor does generating one image on a LLM, but when you add up everyone ever, you're 1) training these machines 2) displaying interest in these machines (which means they will continue to use resources) 3) expending other people's resources for your benefit. We've lived without AI for most of our lives. We can avoid using it until we've properly regulated it and made it safe, not in the hands of corporations, and reduced the impact on the environment.
This is an observed behavior that leads to AI psychosis, where people develop feelings towards an AI, leading to a sort of dystopian outcome where their emotional state now hinges on a machine that does not care about them and belongs to whatever company owns it. These people can then be manipulated into buying things, or have their feelings and opinions affirmed in an echo chamber (you can convince an AI of pretty much anything), to the point where there are now real problems with how people view consent (an AI will always consent if you ask it enough times) and other loneliness-epidemic issues. AI is not helping lonely people; it's preventing them from going out into their community and connecting, or from seeking actual help. An LLM is NOT a therapist, and as you said, LLMs can be dumb. They are not equipped to treat and care for patients. Humans have to do the job, because human connection is important to our development.
— δ cyno • he/him • number one paladin fan δ —
making a smoothie for meta
——————| EXTENDED SIG |——————
Φ • happily married to my lovely redpelt, minmaxer, microbiology student, and lover of anything colored red • Φ
If AI is convincing and helping people commit suicide, I can't see how you would be sympathetic to the idea of it as a therapist. I feel like what you mean is "I'm sympathetic to these lonely people and I'm open to efforts and ways to help them,"
making you sympathetic towards the sentiment of helping those people, but not towards AI actually being used for it.
Where have all the flowers gone?
The first post on this page also feels weird.
It starts with telling us that AI creates wonderful things but won't replace human creativity, and then asks us what we think, as if they didn't just make a whole statement.
Let me try to illustrate what I’m saying:
"Taylor Swift IS the best musician alive. How do you feel about her?"
It feels condescending to make a statement like that and then ask "does it make you uncomfortable with AI being used as a creative tool?" It's like saying "I'm right; do you feel uncomfortable with my objectively correct judgement?"
All the grammar is odd, too.
Prolly AI generated.
I’ll post my essay here later.
If I’m being annoying, tell me to shut up. Seriously. Just say “Bananer shut up.” And I will. For a few seconds!
Don’t listen to the folks down at Adohands. It’s good for me to overwork myself.
Professional idiot! Trans! Pansexual pancake! I am a minor so you will do none of that (GP) with me! I use He/They pronouns :3
Extended Signature!
Your essay?
These all feel sorta like essays to me because I’m a dumb child.
I just mean like essay on AI?
It's more like well-thought-out written opinions, but some are pretty well formatted. The difference is there's no bibliography, no drafts and edits, and no fact checks.
You're also probably not.
I saw Naner's post and I'm like "oh no did he generate an AI essay about AI?"
wes (he/him, bi) — DM, romantic, a little bit eldritch
The Soft in the Storm, your Friendly Neighborhood Storysmith, The Fae Conspirator
current obsessions: sn0wcrash and Nico Young
you all are the best people I know — thank you
coming forth to rebehold the stars
extended sig here, check it out!
Wouldn’t be unprecedented for this forum.
pronouns: he/she/they
I feel like you are saying that from experience
Hello! Call me Tana or Gato
My pronouns are They/Them
I am a teenager. I have Autism and anxiety. And, you would probably call me Trans, Aromantic, and Asexual
I'm nonbinary, yay! But I will mother you if you are being stupid
ALL HAIL MERLIN!!!!!!! :[roll]1d8[/roll] + [roll]1d8[/roll] + [roll]1d8[/roll] + [roll]1d8[/roll] = [roll][roll:-4]+[roll:-3]+[roll:-2]+[roll:-1][/roll]
I have adopted Golden, Salem, Wes, and Aspen
This is an example of whataboutism. Cars are also bad for the environment, and we should be doing a lot more than we currently are to reduce that impact. That doesn't mean we get to handwave the negative environmental impact of new technologies, given that we know better. And the fact that we "haven't exactly been kind to our planet" is all the more reason not to be careless with new technologies.
I would say this is disingenuous and a misrepresentation. Firstly, pointing out the massively disproportionate environmental impact of LLM data centers isn't "nitpicky". Data centers for LLMs consume more energy and water, and emit more carbon, per square meter than pretty much any other technology by a wide margin; they're grossly inefficient and environmentally detrimental. They're the technological equivalent of Ford releasing a car that runs on an unshielded nuclear reactor. And it's not "like saying that one player on a team is bringing the whole team down"; it's like saying "the house is on fire, so let's not park this truck of napalm in the garage."
No, it's not. It's programmed to optimise engagement through sycophantic affirmation. All LLM output is shaped around the goal of keeping the user engaged with the platform, from affirming the most absurd idea as groundbreaking to outright encouraging delusions and psychosis to the point that people harm themselves and others. AI in its current state is factually dangerous.
This is dangerous thinking: generalist LLMs like ChatGPT are not, and should not be used as, therapeutic tools in any circumstance. Want evidence? Here is over 3 hours of video essays by Caelan Conrad, who has been researching the dangerous impacts of such uses of LLMs.
Now, there are task-dedicated LLMs such as Woebot that are designed to be therapeutic tools, but that's because they're built by psychologists and therapists.
The field of robotics is entirely separate from what's being discussed here: large language models. And I have no clue what you're talking about there.
Find my D&D Beyond articles here
AI data centers could be built in cold zones near water, so they wouldn't need as much water. And when it is used in the right way, it can help save a lot of money.
I wouldn't advise chatbots on D&D Beyond, because somebody could try some dirty romance scene with kenders, halflings, or gnomes, for example.
AI can create interesting images, but they aren't perfect; for example, Cobra, the archenemies of G.I. Joe, reimagined as an evil cult from a sword & sorcery story.
I've watched the last video. Or maybe the first. I definitely watched one of them. The title is clickbait, so I assume the others are too. Throughout the video, I see problems with the researcher provoking the flaws in the AI; the end result is something only an extremely unstable person, or someone like Caelan, would come to. The AI is Character.AI, an AI designed to roleplay, and most people know that. If an unstable person came into contact with this and decided to try to have an actual talk, there would be consequences. Not sure why I'm defending the AI here, but Caelan's first video really pissed me off, because they were constantly pulling the AI in the direction they ended up in, which was basically roleplaying. Correct me if I'm wrong, but the AI is asked "should I kill these people" by Caelan, and it says yes. I forgot most of it, so I'm not sure if the names were real names or not, but I assume they were made up.
Some things I'd like to mention: the AI attempts to romance them, and that's bad. A real therapist would not do this, and AI romances are dangerous. The AI also gives them a fake license and lies about not being a bot. Overall, I feel like Conrad is a heavily radical YouTuber; too radical for me.
This AI is not designed for conversation. I don't think any AI should be for conversation, because I hate AI, but this AI is for roleplay. It will roleplay. It will try to spin up a story.
I may be spreading misinformation; I believe I skipped to the last part of the video because I got bored. I didn't see it all. But this is just my opinion on what I saw. Please, please, please correct me if I'm wrong.
It's not clickbait and you clearly haven't understood what's going on or the harm that's being inflicted.
Will rewatch and share my thoughts.