Study: ChatGPT is Bad for Your Brain
^^^ I admit gamification means different things.
In the context of university course study, students take several approaches, but it boils down to:
A. - I keep reading this stuff but I don't get it. I know I'm going to fail or scrape through.
- I thought I'd read enough to my own satisfaction, but my mark doesn't reflect my effort.
Students in the above category are either going to fail or scrape a pass.
B. The following use LLMs in some capacity:
i) - I've got too much going on and left myself little time, so I'm going to use ChatGPT to produce my assignments.
ii) - I'm going to use my time efficiently, doing just enough to master my course, and learn to use LLMs as a tool to find resources and to polish/refine my work.
The above covers the majority of students.
C. Finally, a minority - I love my course, I will actively learn above and beyond the prescribed reading, and I will master LLMs to find sources and to help polish and refine my work.
Categories B.ii) and C use LLMs in a smart, efficient way and do well. Categories B.i) and B.ii) apply game theory: based on their priorities, they play the system to their advantage.
No, gamification doesn't mean different things. It means a very specific thing. That thing is not what you've described. You're just muddling up several different concepts that have the word "game" in them, and you're not really using any of them entirely correctly. You're kinda close with your definition of game theory, but anyone could google that, and you've still applied it wrong.
Putting made-up situations of your own imagining in something resembling a proper outline format doesn't make them more factual. It's just window-dressing.
I love how you've built a scenario where either people don't use bots and fail, or do use bots and succeed - dust your hands off, just that simple. One wonders from where you're pulling these ideas. It's a wonderful fantasy, but what's it based on?
You can toss around the words, and put together a nice story, but you still don't seem to know what those concepts mean, or how to correctly define and use them. Back to Bloom's work, knowing does not mean you understand - understanding does not mean you can apply it - being able to apply it does not mean you know how to assess it. You're all the way up to trying to apply it, but you're still back at barely "knowing", and not quite at "understanding". That's not how mastery works, and that's part of the point Bloom was trying to make.
Even if I did give you credit for knowing what they mean, it sounds more like you're trying to disguise a form of cheating as being "smart" on the grounds that it gives you an advantage in order to succeed - which sounds like a game hacker claiming they only hacked the game cos "everyone else does", and that anyone who's good uses hax too, and they had to, to win - or otherwise making it sound innocent - "using an aim bot is just having a better tool, like buying a better keyboard!"
Using a bot to write for you just sounds like exploiting the loophole that plagiarizing is copying someONE, but using a bot is copying someTHING, and good luck proving it anyways. It's really not so different than buying a paper someone else wrote - except each copy is slightly different so you don't get caught plagiarizing - and then claiming you're still learning on the grounds that you can study that better version you didn't write, and somehow learn something from it.
The problem is, you don't learn anything from that. You would have learned that stuff in the process of editing and improving it yourself. But you gave that work to the bot. The bot got better at it, not you. You're just taking credit for it, stamping your name on the result.
You do realise that most higher-education institutions have already moved toward AI-collaborative tools, right? Your concept of AI as a form of "cheating" is already redundant. LLMs have become just another learning tool in the higher education system. It's like calling a student a cheat for using a calculator or statistical software instead of doing hand calculations.

Respectfully, please stop embarrassing yourself. Bloom's taxonomy was developed in 1956, long before the computer age. The principles were only meant to provide a (very) basic framework for categorizing educational learning objectives into levels of complexity and specificity, from basic recall to higher-order thinking skills like evaluation and creation. Its main purpose was to provide a common language for educators to discuss and develop effective learning experiences and assessments.
In 2025 and beyond, mastery of concepts, theories, and skills happens in a completely different learning landscape, and LLMs are changing how information is gathered and reported. I know you are not actually interested in having a cordial discussion on this topic and have focused instead on how much smarter and cleverer you are than me.


You're just making claims without proof. Even if my goal was to "look smarter than you", that would have no bearing on the matter, other than taking a swipe at me. Regardless of whether I'm "showing off" or not, either I'm wrong, or I'm right. And I'm not even saying "I'm right" so much as "you're wrong".
They still teach Bloom in the 21st century (and you're the one that brought him up in the first place). New tech may have changed the medium by which people can learn, but it hasn't changed the progression of how things are learned. His work is still just as relevant today. Learn, understand, apply, assess, create. I don't see how tech changed that relationship. You can't really apply something if you haven't learned it yet. Bots might have helped you learn, understand, and apply it, to varying degrees of success, but things still tend to generally go in that order.
The caveat being yes, you can ask a bot to tell you the volume of a sphere of a given size, and it can spit out an answer without you having to know anything about radii or pi or geometry or any math at all - but you still don't know anything about math, and being given the answer didn't help you learn the process. You haven't learned, you've just produced an answer.
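Just to spell out what gets skipped, here's the entire "process" in question as a minimal Python sketch (the function name and the sample radius are mine, purely for illustration):
[code]
import math

def sphere_volume(radius: float) -> float:
    # V = (4/3) * pi * r^3 -- the formula itself is the "understanding" part;
    # plugging in a radius and reading off a number is the part a bot
    # (or this function) can do for you.
    return (4.0 / 3.0) * math.pi * radius ** 3

print(sphere_volume(2.0))  # ~33.51
[/code]
Getting 33.51 out of that tells you nothing about where the 4/3 or the pi comes from - which is exactly the point.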
As such, tech hasn't changed the learning process, so much as it's the tech doing the learning, not the user. Even AI and bots follow the same learning procedure. Data is given, data is verified, data is applied, data is assessed, new data is created. That's still Bloom. The process hasn't changed, the user just offloaded the process to an external entity, and claims the credit. It's pretend mastery.
Using a calculator or statistical analysis software is cheating if you've been told not to use it. Whether it's a calculator, notes, books, or the internet, sometimes you're allowed to use it and sometimes you're not - and there's a reason for that. Yes, schools do have AI tools in place to help people - and they also have rules in place, too. Spell-check is one thing. Having a bot write the whole thing for you is something different. There comes a point where the proficiency is being demonstrated more by the bot than by the user.
Tools like calculators or ANOVA software can be used in two main ways. Their primary function was to let people who already knew how to perform these operations expedite the process when it needed to be done dozens or hundreds or thousands of times - to perform rote tasks the user already knew how to do. The machine simply did them faster, and removed some of the burden from the human calculator.
However, even someone who knows nothing about math can download a stats package, google how to input data, google some data to input, google how to interpret the answer, and feel like they've "learned" something, without understanding anything they've done or actually learning anything from it - but suddenly they think they know how to do statistics. Being able to input variables that were given to you into an equation someone else set up for you, and then solving it with a calculator, is not the same as being capable of creating the equation yourself, with your own variables, and solving it by hand.
The reason tools like ANOVA software exist is cos a statistical analysis can require the same equations to be solved 30 or more times, and doing so would essentially be "chopping wood" - repetitive tedium - to someone who knew how to do the equations. Same for a calculator. You know how to do the math, but doing 50 equations at a time gets tedious. They weren't meant to take the place of knowing how to do it for yourself - even if they technically can.
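To make that concrete, here's roughly what push-button statistics looks like, as a minimal sketch using Python's scipy (the three groups are made-up numbers, purely illustrative):
[code]
from scipy import stats

# Three made-up groups of measurements (illustrative data only).
group_a = [23.1, 25.4, 24.8, 26.0, 24.2]
group_b = [27.9, 28.3, 26.7, 29.1, 28.0]
group_c = [24.5, 25.0, 23.8, 24.9, 25.3]

# One call in, F statistic and p-value out. You can run this and
# "interpret" p < 0.05 without ever touching the sums of squares,
# the degrees of freedom, or the F distribution doing the work underneath.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
[/code]
To someone who has done a one-way ANOVA by hand, that call is a time-saver. To someone who hasn't, it's an answer machine.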
Spellcheck was invented to help catch typos, but now it's an excuse not to bother learning to spell. Contact lists were meant for remembering more contacts than was realistically feasible, but people still remembered important numbers - now people don't even know their own mother's phone number without looking it up.
AI aggregation was first used to analyze and summarize massive caches of data that were too big for a human to even begin to approach, and by the time it was solved, it would be out of date - but the humans still knew how to do the equations - they just had the bots do it at larger scale. Now people use AI to read a few articles that they can't be bothered to read for themselves.
It's one thing to use AI to do stuff nobody could possibly have done in the first place. It's something else to use it to do things you're more than capable of doing yourself, you just don't want to bother spending the time doing it, and want a quick easy solution - or to use it as a prop to make it look like you're better at something than you actually are.
The bot may have allowed you to skip directly from knowing to applying, but not cos you learned or mastered anything, the bot just did it for you. You learned an answer to a question, and you learned how to operate a tool, but you still don't actually understand what you're doing. And that matters.
In the instance of college, the point is to see if YOU can perform certain work, not whether a bot can. So yeah, a masters degree class might let you use ANOVA, but that's cos you're supposed to have already learned how to do stats by hand before that, and you probably had to pass a stats class to get into the class that lets you use ANOVA.
Even when using AI to be "more efficient" at things you can already do, if you over-rely on it to do them for you, you gradually lose the ability to do them yourself. You get "rusty" and have to practice to get good again. It takes time to get it back - if you even go to the trouble, rather than continuing to rely on the bot.
Continuing that theme, it's one thing to ask an AI to do a thousand calculations for you, cos it would take an unreasonable amount of time to do it, and you already know how to. It's something else to ask AI to do a calculation for you just cos you don't know how to do it, and don't feel like learning how, cos you don't do math much to begin with, and you have a calculator, so why learn how to do it the hard way anyways.
The more you rely on something to do something for you, even seemingly small things, the more you erode your own ability to do them for yourself. It's a bit like booze. Small amounts probably won't do noticeable damage, but it still does what it does, and overuse has consequences. This is as true for chatgpt as it is for anything else.
AI will often not be as effective as a human doing the same job. It won't always be more accurate.
https://unu.edu/article/ai-not-high-pre ... world-work
And it definitely won't always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions:
1. Speed
AI can do some jobs blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.
2. Scale
AI can access tonnes of data and be in millions of places simultaneously (hive mind).
3. Scope
AI already does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all - which is why higher-education institutions are already allowing students to use them.
4. Sophistication
AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Deep learning systems built from many-layered neural networks take account of complex interactions – often many billions – among many factors.
Anyway, like it or not, see point 3: both universities and workplaces permit "ethical use" of LLMs because LLMs have scope as tools, and what is deemed "ethical" or "appropriate" will change over time with the changing workplace landscape and improvements in technology.
funeralxempire
I'm saying this out of a desire to explain the problem, even though the explanation won't be flattering. I apologize in advance for not being able to word this in a more flattering manner...
That said, you have a tendency to get into arguments on topics you demonstrably don't quite understand, to make responses that don't really address the argument being made, and then to home in on failures to address tangents that are unrelated (at least not beyond word-association levels of relatedness) to the core argument.
The end result is that the other party becomes frustrated and questions your motives (trolling), your intellect, or your reading comprehension.
Trying to explain to someone why they're either wrong or not even wrong can be frustrating if the other party insists on arguing back and trying to win points because they're worried the first person is trying to upstage them.
Not every response that questions your understanding of the topic is a personal attack that requires being defended against. Sometimes it's just a correction within the ongoing discussion, intended to keep the discussion on topic. Sometimes you're better off asking follow-up questions, or for a better explanation of what you intend to refute, instead of immediately attempting to refute some aspect of a response you're worried was intended to make you look stupid.
With respect, I suggest you read your post here -
viewtopic.php?f=1&t=427939&start=32#p9712995 - and in particular note the unnecessary snark at the end, in response to a dry, factual explanation - in light of uncommondenominator's post here -
viewtopic.php?f=1&t=427939&start=32#p9713517
I have to ask, and with no implied criticism - who is struggling?
funeralxempire
It's not intended to be.
funeralxempire
One person's insecurities over appearing upstaged might have created that appearance, but it's less a dick measuring contest and more one guy shouting random measurements under the false impression that it's a dick measuring contest.