What you are saying is analogous to saying occurrences do not happen unless someone is there to see them.
Not at all. Sure, things are presumably happening that we have no evidence of -- but we can't make statements about what those things are without evidence.
I'm not sure why you bring up species. No one is claiming LLMs have human intelligence.
Do you have a specific reason to think the CEO of a company that has publicly stated it will not use AI-generated content in its products is going to try to reverse that decision, when pretty much all signs point to that being met with massive backlash? The commentary in one article most of the playerbase is unlikely to read, and that the portion who feel most strongly about AI is unlikely to be moved by, hardly changes that. Or is this just a default "they're a CEO, therefore they must be both evil and shortsighted" assumption?
Reread what I wrote. I didn't say you 'know' that he/she thinks. I said he/she knows that he/she thinks. That is why it is immediately followed by the point about how we are conscious of this ourselves. We are all conscious of our own capacity for thought, including him/her, whether or not you 'know' it. Your subjective perspective on the matter is irrelevant.
It is analogous to what you are saying, because you are acting as if you personally must observe whether or not someone is a 'thinking human being' in order for this to be knowable. For you to know it? Perhaps. But they are a thinking human being whether you know it or not.
I wrote that he has the "hope" of being able to do that.
Obviously the quality of AI-generated content at the moment is not good enough to fool people, especially when you have practice spotting it. I think LLMs need much more training to be able to do that, if that point is ever reached at all.
Because Cocks' rhetoric on the subject of AI mirrors that of other corporate types who are all giddy about it, and who couldn't care less that it is negatively impacting labour and the environment. Cocks was being loose with the truth when he said 'everyone' in the hobby was using it in some capacity for their games, trying to get us all to embrace its presence. Can't imagine why he might do that!

Just because someone is a CEO doesn't necessarily mean that someone is 'evil and shortsighted.' But even the most seemingly wonderful of businesspeople can turn into vicious worker-loathing bullies when there is more money to be made. I have seen this firsthand. It takes an extraordinary degree of naivety to believe Cocks isn't at all hoping attitudes towards the use of generative AI shift so they can meet more of those costs-and-returns goals you yourself seem to believe they prioritize above all else. And the fact that Cocks is only ever singing its praises, given how detrimental its use is to the environment, should give you some indication he is far from the opposite end of 'evil and shortsighted.'

Cocks, like so many CEOs right now, is probably only reading whatever provides confirmation of his own biases about AI. He is clearly reading the business lit out there on the subject. It's the future! Its impact on the environment, which is irrefutable? You won't ever hear him talking about that. He will only ever speak well of AI. I know the type. The president of the company I work for is the same, and the way he expects everyone around him to share his enthusiasm and just block out the negative is behavior more becoming of a cult leader.
Intelligence is fundamentally about what's going on under the hood. If you can't accept discussions involving that, then the discussion is at a dead end.
Talking about what's going on 'under the hood' is fundamentally a dead end, because while we have some idea of what's happening at the fine-grained level, at the holistic level required to evaluate whether something 'knows' what it's doing we don't really know what's going on under the hood... for either humans or modern neural-net AIs. The reason we use observed behavior is because that's the information we have. I don't actually know that you're a thinking human being, I just assume that you are because you behave like one.
We do know what's going on under the hood in LLMs in vastly greater detail than we do in actual brains. The code is comprehensible. The data -- the humongous vectors of token-correlation values -- may not be, due to its scale, but we know how it works. (For values of "we" that don't include either of us. I may have enough background that I could follow it within a reasonable time frame, although I know how bad my vector math is, but I've never gone beyond the high-level view.)
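To make that concrete, here is roughly what the comprehensible part looks like. This is a minimal illustrative sketch of the attention step at the heart of transformer-style LLMs, not any particular model's actual code, and all names and shapes are made up for the example. The point is that these few lines are readable; what resists inspection is the billions of learned values they operate on.

```python
import numpy as np

def attention(Q, K, V):
    """One attention step: each token mixes in information from other
    tokens, weighted by how strongly their vectors correlate."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # token-to-token correlation scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)     # softmax: rows become mixing weights
    return w @ V                           # weighted mix of the value vectors

# Toy scale: 4 tokens, 8-dimensional vectors. A real LLM runs these same
# few lines over matrices holding billions of trained parameters -- that
# scale, not the code, is what no one can eyeball.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```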
The thinking class should not be in favor of it either. It's taking their jobs under the guise of a "thinking" tool. Scientists are our thinking tools. Robots (from the Czech robota, "forced labour") are already taking over the jobs of blue-collar workers. AI is just the robot of scientists and thinkers.
As for AI in D&D, it will take the place of DMs for some simple missions. But will it eventually be creating the campaigns? Or will it just DM missions inside of greater campaigns?
Could this CEO think they can create an AI that makes up the campaigns and thus get rid of all third-party content? All DMs?
Just make a video game, some D&D-meets-WoW type thing. He obviously wants the microtransactions from a system like that. Monthly subscriptions and all.
The only use I have for AI is the replacement of CEOs.
AI might get rid of a lot of jobs, but it also might enable Universal Basic Income.
You do realize that the people who are pushing for increased use of AI all heavily oppose UBI, right? Giving them more power and authority will not have the effect of making it easier to do stuff they hate.
AI does use a crazy amount of energy, but it also enabled advances in nuclear fusion. I think it will solve its own energy issue.
What advances?
CEOs would be a lot easier (and more cost-effective) to replace than artists or engineers, etc. But corporations are already all-consuming; putting glorified Markov chains in charge of them would probably hasten our demise.
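For anyone who hasn't run into the term: a Markov-chain text generator picks each next word purely from counts of which word followed which in its source text, with no model of meaning at all. A toy sketch for illustration (corpus and names invented for the example); real LLMs are vastly more sophisticated, but the jab is that both are, at bottom, next-token samplers.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words seen immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n=10):
    """Walk the chain, sampling a recorded successor at each step."""
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the dragon hoards the gold and the dragon guards the hoard")
print(generate(chain, "the"))
```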
Livermore Labs recently managed to get more energy out of a nuclear fusion reaction than the lasers put into the target.
Relation to generative AI: probably nonexistent (plenty of computer power involved, and there are some other AI tools that might be useful, but generative AI isn't really the right tool for the problem).
That has nothing to do with AI*, and "maybe fusion is actually going to work this time" doesn't solve the energy problems, since even if it works, it's years away from production, and decades from ubiquity.
* I suppose it's possible that machine learning was involved in some way, but that's not what people are talking about when they say "AI" these days. It's also an actually useful use of the tech, unlike generative systems.
2.) The advance made on December 5, 2022 at the Lawrence Livermore National Laboratory (LLNL) in California was achieved thanks to AI
Given that ChatGPT was released on Nov 30, 2022, the odds of it having anything to do with that advance are zero. This doesn't mean no AI tool was involved; people have been using learning models for physical modeling for a while now, but that's not generative AI.
In David Goodhart's Head Hand Heart, he talks about how even those working directly on its development say it will mostly displace workers whose work is 'cognitive' in nature.
Some manual labour will always require a human touch. There will always be consumers who, given the choice, will pick a product made or grown by hand over one made by a machine in a factory. And care work will always require a human touch.
It gladdens the hearts of CEOs pushing for its use in their companies to think they will no longer have to pay people to build spreadsheets or analyze data, or that they will no longer need to hire creatives.
White-collar workers are naively thinking it will just mean all the 'hard work' of their 'inferiors' can be done by robots. Their enthusiasm is heralding their own obsolescence.
Nope. I don't know that other people think, I just assume it's the case.
I didn't claim it was deep.
I think the CEO is just hoping to cut costs without backlash by replacing personnel with AI.
I am not in favor of AI in general.
I'm not the one 6thLyranGuard quoted. That is simply the only recent development in nuclear fusion I know of. [edit: it seems I was right]
"... might ..."
"I think ..."
And that's half the problem. You expect the whole world to embrace the technology based on little more than mights and what you believe.
The enthusiasm for AI in tech media and business lit has all the trappings of a cult.
Please elaborate on how enthusiasm for a technology is like a cult.
To place your faith in AI is equally foolish.