AI might get rid of a lot of jobs, but it also might enable Universal Basic Income.
You do realize that the people pushing for increased use of AI all heavily oppose UBI, right? Giving them more power and authority will not make it easier to do things they hate.
AI does use a crazy amount of energy, but it also enabled advances in Nuclear Fusion. I think it will solve its own energy issue.
What advances?
Livermore Labs recently managed to get more energy out of a fusion reaction than the lasers put into the target.
That has nothing to do with AI*, and "maybe fusion is actually going to work this time" doesn't solve the energy problems, since even if it works, it's years away from production, and decades from ubiquity.
* I suppose it's possible that machine learning was involved in some way, but that's not what people are talking about when they say "AI" these days. It's also a genuinely useful application of the tech, unlike generative systems.
It may not be what _you_ are talking about when you talk about AI, but it is clearly what other people mean.
By the way, I guess I missed it, but when did you get your graduate degree in AI or any closely related field? My graduate work was in Systems Architecture and Engineering at Viterbi. It’d be nice to confirm that you actually have relevant graduate education in the field before you do anything as brash as telling people they are using the wrong language.
2.) The advance made on December 5, 2022 at the Lawrence Livermore National Laboratory (LLNL) in California was achieved thanks to AI
Given that ChatGPT was released on Nov 30, 2022, the odds of it having anything to do with that advance are zero. This doesn't mean some sort of AI tool was not involved; people have been using learning models for physical modeling for a while, but it's not generative AI.
The only use I have for AI is the replacement of CEOs.
In David Goodhart's _Head, Hand, Heart_ he talks about how even those working directly on its development are saying it will mostly displace workers whose work is of a 'cognitive' nature.
Some manual labour will always require a human touch. There will always be consumers who, given the choice, will pick a product made or grown by hand rather than by a machine in a factory. And care work will always require a human touch.
It gladdens the hearts of CEOs pushing for its use in their companies to think they will no longer have to pay people to build spreadsheets or analyze data. Or that they will no longer need to hire creatives.
White-collar workers are naively thinking it will just mean all the 'hard work' of their 'inferiors' can be done by robots. Their enthusiasm heralds their own obsolescence.
That’s not true. I know _many_ white-collar workers who know AI will do many of the things they themselves are doing.
That’s why so many AI experts support Universal Basic Income.
Sure, but generative AI is what all the buzz is about. In the gaming space, better quality AIs than what video games actually use have been available for decades (the earliest application of AI to RPGs that I know of was using Eurisko to beat Trillion Credit Squadron, back in 1981), and people don't even bother.
Folks. Listen. Executives at cigarette companies don't smoke cigarettes. Executives for McDonald's don't eat at McDonald's. And executives at Hasbro don't play D&D. The only reason Hasbro bought D&D is to make money. Period. They saw an intellectual property that their analysts considered to be "under-capitalized", and they fully intend to capitalize on it. If that means using A.I., then that's what they'll do. If that means repackaging existing material (with a few little tweaks here and there) so they can justify charging you $60 a pop for a whole new line of books, then that's what they'll do. If that means micro-transactions, then that's what they'll do. If that means laying off the experienced creatives who built this game up from an obscure niche hobby to a global powerhouse, then that's what they'll do. All that matters to the C-Suite is the bottom line. It's not about "good vs evil", it's not about "characters vs monsters", it's only about "profit vs loss". A.I. is a genie that cannot be put back into the bottle. I don't like it, you may not like it, so all we can do is focus on our game at our table and try to preserve the things that made D&D great to begin with.
Thank you for attending my Ted Talk. Parking will not be validated.
Sustainability is irrelevant. Sustainability only matters if you're planning long term. Hasbro, like nearly every corporation in America today, has completely and utterly given up on long term planning. Every company today is running on a hedge fund mentality. All that matters is today's profit. They only care that this quarter's profit is higher than last quarter's profit. They only care that this year's sales are higher than last year's sales. Next quarter, or next year, won't matter to them until next quarter or next year. And if the market takes a dip before then, they will happily burn it all down, cash out, and invest that money in the next thing coming down the pike.
AI is a new toy. Executives are like children. They all want to play with the newest toy.
This is true.
But my point about sustainability was about OpenAI.
I agree with much of what you have said there.
Except this: A.I. is a genie that cannot be put back into the bottle.
And why?
Because its development is not profitable.
OpenAI is pouring more money into developing it than it is making from it. For every $2.35 they spend on it, they make $1.
If it's all about 'profit versus loss', then the development of AI is just not sustainable.
Companies will want to use it, if profits matter more to them than anything else.
But there will come a point when OpenAI realizes it's stupid to burn tens of billions of dollars.
And for what? So people with too much time on their hands can have it make memes for them?
These companies are predicting future revenue from AI development.
Let's hope for their sake their crystal balls work better than something like ChatGPT does.
And in case you missed my answer to your question:
Because it takes a cult-like level of dissonance to talk one minute about how much you 'care' about the environment and then, the next, cheer and applaud the use and development of AI. Talking endlessly about how it might provide solutions while turning a blind eye to what it is doing.
Talking about how you 'care' about working people while knowing very well it is predicted to displace upwards of 300 million workers globally.
That is how members of cults 'think.'
Their actions are at direct odds with their claims.
At the educational institution where I work, the only people enthusiastic about AI are those who almost exclusively read business lit and industry news. The rest of us, who read widely, remain highly skeptical, and not just because of what we read: we have already witnessed utter failures in its use for what we do. But the enthusiasm remains, and those in charge sit like emotionless robots and ignore any concerns about those failings.
That is the behavior of a cult.
I cannot speak for the mentalities of other people I know nothing about.
I'm not the one 6thLyranGuard quoted. That is simply the only recent development with nuclear fusion I know of. [edit: it seems I was right]
... might ...
I think ...
And that's half the problem.
You expect the whole world to embrace the technology based on little more than mights and what you believe.
The enthusiasm for AI in tech media and business lit has all the trappings of a cult.
Nobody has a crystal ball.
If anybody tells you what _will_ happen (pro or con) in the future, run away.
Please elaborate on how the enthusiasm for a technology is like a cult.
But ChatGPT is NOT the only thing that is AI.
Hell, it isn’t even the only LLM.
****, it isn’t even the only LLM on its website.
To place your faith in AI is equally foolish.
As foolish as putting your faith in anything else. But try living your life putting faith in nothing.
To place your faith in anything is foolish, but try putting your faith in nothing.
'Today's profit' for OpenAI looks bad.
They lose money with every prompt.
The ease and pervasiveness of hallucinations in genAI are proof that the tech is too flawed to be trustworthy.
It'll get better? Remember NFTs, The Metaverse, & most crypto not connected to people in power?
Those NEVER got better, despite the assurances by their backers.
& quoting Elon Musk is...questionable, to say the least.
But something I DO agree with...Chris Cocks doesn't play D&D, & it shows.