Page 3 of 3 [ 41 posts ]

uncommondenominator
Veteran

Joined: 8 Aug 2019
Age: 43
Gender: Male
Posts: 1,585

Yesterday, 2:33 pm

cyberdora wrote:
uncommondenominator wrote:
As for the "smarter" students using LLMs more efficiently, a nice claim, but it's usually the mediocre students that leverage LLMs so they can keep up with the work. Either way, if a student is having a bot write a paper for them, give the grade and the diploma to the bot. The student may have turned it in, but the bot did all the work.


It's a fact, I'm afraid. Remember, the students are simply gamifying the system to work smart. Students who choose not to use LLMs may achieve high grades, but they have to put in an order of magnitude more effort to get there.


You've mentioned "gamifying" a few times now, but still haven't explained how, or to what end. It's just "they gamify it!" with no additional details. It's almost like you're playing with words you aren't familiar with. (Hint: you're not using the term correctly - it seems like you've confused two very similar terms that mean different things.)

Again, it was the "mediocre" students who had to rely on bots, cos without them it would have taken an "order of magnitude" of additional effort just to keep up.

The idea that the only way to get high grades is to either work "orders of magnitude" harder, or cheat using AI, is... how to say this politely... it speaks more of your own personal struggles than anything else.

It's almost as if your claim has the built-in assumption that nobody could possibly get good grades in college w/o working 10x harder than other people. And yet, I've seen it happen pretty regularly. Most people just have to put in "the work" and not exert "ten times the work", to get a B or an A. The people that did have to put in "magnitudes" of work, were usually the slow kids who were in over their head, and needed to put in extra work to keep up. Using a bicycle to win the footrace may be "clever" and probably would work, but you're still not any good at running, you just have an award you didn't earn.

Same goes for school. Using a bot may make it "easier" and "save some time", and may even make you look a little "smart" - and in that sense, cheating is "smarter" than "working hard" cos it's "easier", I guess, but you're still not actually smart. You can show off your "smarts" like a performance art, but if anything takes you "off script", you're just gonna trip all over yourself. You can recite what google feeds you, but you certainly can't wield it as your own.



cyberdora
Veteran


Joined: 12 Jan 2025
Gender: Non-binary
Posts: 2,065
Location: Australia

Yesterday, 6:13 pm

^^^ I admit gamification means different things.

In the context of university course study, students take several approaches, but it boils down to:

A. - I keep reading this stuff but I don't get it; I know I'm going to fail or scrape by
- I thought I'd read enough to my own satisfaction, but my mark doesn't reflect my effort
Students in the above category are either going to fail or scrape a pass.

B. The following use LLMs in some capacity:
i) - I've got too much going on and left myself little time, so I'm going to use ChatGPT to produce my assignments
ii) - I'm going to use my time efficiently, doing just enough to master my course, and learn to use LLMs as a tool to find resources and polish/refine my work
The above is the majority of students.

C. Finally, a minority: I love my course, I will actively learn above and beyond the prescribed reading, and I will master LLMs to find sources and to help polish and refine my work.

Categories B.ii) and C use LLMs in a smart, efficient way and do well. Categories B.i) and B.ii) use game theory: based on their priorities, they play the system to their advantage.



uncommondenominator
Veteran

Joined: 8 Aug 2019
Age: 43
Gender: Male
Posts: 1,585

Today, 12:26 am

:lmao:

No, gamification doesn't mean different things. It means a very specific thing. That thing is not what you've described. You're just muddling up several different concepts that have the word "game" in them, and you're not really using any of them entirely correctly. You're kinda close with your definition of game theory, but anyone could google that, and you've still applied it wrong.

Putting made-up situations of your own imagining in something resembling a proper outline format doesn't make them more factual. It's just window-dressing.

I love how you've built a scenario where either people don't use bots and fail, or do use bots and succeed - dust your hands off, just that simple. One wonders from where you're pulling these ideas. It's a wonderful fantasy, but what's it based on?

You can toss around the words, and put together a nice story, but you still don't seem to know what those concepts mean, or how to correctly define and use them. Back to Bloom's work, knowing does not mean you understand - understanding does not mean you can apply it - being able to apply it does not mean you know how to assess it. You're all the way up to trying to apply it, but you're still back at barely "knowing", and not quite at "understanding". That's not how mastery works, and that's part of the point Bloom was trying to make.

Even if I did give you credit for knowing what they mean, it sounds more like you're trying to disguise a form of cheating as being "smart" on the grounds that it gives you an advantage in order to succeed - which sounds like a game hacker claiming they only hacked the game cos "everyone else does", and that anyone who's good uses hax too, and they had to, to win - or otherwise making it sound innocent - "using an aim bot is just having a better tool, like buying a better keyboard!"

Using a bot to write for you just sounds like exploiting the loophole that plagiarizing is copying someONE, but using a bot is copying someTHING, and good luck proving it anyways. It's really not so different than buying a paper someone else wrote - except each copy is slightly different so you don't get caught plagiarizing - and then claiming you're still learning on the grounds that you can study that better version you didn't write, and somehow learn something from it.

The problem is, you don't learn anything from that. You would have learned that stuff in the process of editing and improving it yourself. But you gave that work to the bot. The bot got better at it, not you. You're just taking credit, stamping your name on the result.



cyberdora
Veteran


Joined: 12 Jan 2025
Gender: Non-binary
Posts: 2,065
Location: Australia

Today, 2:59 am

uncommondenominator wrote:
Even if I did give you credit for knowing what they mean, it sounds more like you're trying to disguise a form of cheating as being "smart" on the grounds that it gives you an advantage in order to succeed - which sounds like a game hacker claiming they only hacked the game cos "everyone else does", and that anyone who's good uses hax too, and they had to, to win - or otherwise making it sound innocent - "using an aim bot is just having a better tool, like buying a better keyboard!"


You do realise that most higher education institutions have already moved toward AI-collaborative tools, right? Your concept of AI as a form of "cheating" is already redundant. LLMs have become just another higher education learning tool. It's like calling a student a cheat for using a calculator or statistical software instead of doing hand calculations :lol: .



cyberdora
Veteran


Joined: 12 Jan 2025
Gender: Non-binary
Posts: 2,065
Location: Australia

Today, 3:11 am

uncommondenominator wrote:
You can toss around the words, and put together a nice story, but you still don't seem to know what those concepts mean, or how to correctly define and use them. Back to Bloom's work, knowing does not mean you understand - understanding does not mean you can apply it - being able to apply it does not mean you know how to assess it. You're all the way up to trying to apply it, but you're still back at barely "knowing", and not quite at "understanding". That's not how mastery works, and that's part of the point Bloom was trying to make.


Respectfully, please stop embarrassing yourself. Bloom's taxonomy was developed in 1956, long before the computer age. The principles were only meant to provide a (very) basic framework for categorizing educational learning objectives into levels of complexity and specificity, from basic recall up to higher-order thinking skills like evaluation and creation. Its main purpose was to give educators a common language for discussing and developing effective learning experiences and assessments.

In 2025 and beyond, mastery of concepts, theories and skills happens in a completely different learning landscape, and LLMs are changing how information is gathered and reported. I know you are not actually interested in having a cordial discussion on this topic and have focused on how much smarter and cleverer you are than me :roll: Ok fine, you are smarter :lol:



Cornflake
Administrator


Joined: 30 Oct 2010
Gender: Male
Posts: 70,570
Location: Over there

Today, 9:15 am

Cornflake wrote:
Some comments here are getting a little too personal with the snark and derailing the thread even further.

Please have at the topic, not each other.


_________________
Giraffe: a ruminant with a view.


cyberdora
Veteran


Joined: 12 Jan 2025
Gender: Non-binary
Posts: 2,065
Location: Australia

Today, 6:33 pm

^^^ I am defending myself. The person I was responding to has a history of "getting personal", specifically targeting me with personal/snarky comments. I never engage in personal attacks on them and try my best to be civil and stay on topic. I will not respond to them in future.



uncommondenominator
Veteran

Joined: 8 Aug 2019
Age: 43
Gender: Male
Posts: 1,585

Today, 6:44 pm

You're just making claims without proof. Even if my goal was to "look smarter than you", that would have no bearing on the matter, other than taking a swipe at me. Regardless of whether I'm "showing off" or not, either I'm wrong, or I'm right. And I'm not even saying "I'm right" so much as "you're wrong".

They still teach Bloom in the 21st century (and you're the one that brought him up in the first place). New tech may have changed the medium by which people can learn, but it hasn't changed the progression of how things are learned. His work is still just as relevant today. Learn, understand, apply, assess, create. I don't see how tech changed that relationship. You can't really apply something if you haven't learned it yet. Bots might have helped you learn, understand, and apply it, to varying degrees of success, but things still tend to generally go in that order.

The caveat being yes, you can ask a bot to tell you the volume of a sphere of a given size, and it can spit out an answer without you having to know anything about radii or pi or geometry or any math at all - but you still don't know anything about math, and being given the answer didn't help you learn the process. You haven't learned, you've just produced an answer.
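The sphere example is easy to make concrete. Here's a minimal Python sketch of the formula the user never has to learn (V = 4/3 · π · r³); the function name is just illustrative:

```python
import math

def sphere_volume(radius: float) -> float:
    """Volume of a sphere: V = (4/3) * pi * r^3."""
    return (4.0 / 3.0) * math.pi * radius ** 3

# A radius-3 sphere has volume 36*pi, roughly 113.1 cubic units.
print(sphere_volume(3.0))
```

The tool spits out the number either way; the point in the post is that producing it tells you nothing about why the exponent is 3 or where the 4/3 comes from.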

As such, tech hasn't changed the learning process, so much as it's the tech doing the learning, not the user. Even AI and bots follow the same learning procedure. Data is given, data is verified, data is applied, data is assessed, new data is created. That's still Bloom. The process hasn't changed, the user just offloaded the process to an external entity, and claims the credit. It's pretend mastery.

Using a calculator, or statistical analysis software, is cheating, if you've been told not to use it. Whether a calculator, notes, books, the internet, sometimes you're allowed to use it, and sometimes you're not - and there's a reason for that. Yes, schools do have AI tools in place to help people - and they also have rules in place, too. Spell-check is one thing. Having a bot write the whole thing for you is something different. There comes a point where the proficiency is being demonstrated more by the bot than by the user.

Tools like calculators or ANOVA software can be used in two main ways. Their primary function was to let people who already knew how to perform these operations expedite the process when it needed to be done dozens, hundreds, or thousands of times - to perform rote tasks the user already knew how to do; the machine simply did them faster and removed some of the burden from the human calculator.

However, even someone who knows nothing about math can download an ANOVA package, google how to input data, google some data to input, google how to interpret the answer, and feel like they've "learned" something - without understanding anything they've done or actually learning anything from it - but suddenly they think they know how to do statistics. Being able to plug variables that were given to you into an equation someone else set up for you, and then solving it with a calculator, is not the same as being able to create the equation yourself, with your own variables, and solve it by hand.
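To make the contrast concrete, this is roughly the "chopping wood" arithmetic the software hides - a hand-rolled one-way ANOVA F-statistic in plain Python, with made-up sample data:

```python
def f_statistic(groups):
    """One-way ANOVA F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Variation of observations around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)

    ms_between = ss_between / (k - 1)    # df_between = k - 1
    ms_within = ss_within / (n - k)      # df_within = n - k
    return ms_between / ms_within

groups = [[82, 85, 88, 75, 90],   # made-up scores for three groups
          [70, 74, 68, 72, 71],
          [88, 91, 87, 94, 90]]
print(round(f_statistic(groups), 2))
```

Knowing why those sums of squares are split that way is the "learning" part; clicking a button that returns the F value is the part the post argues can be done with zero understanding.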

The reason tools like ANOVA software exist is cos statistics can require the same equation to be solved 30 or more times, and doing so would essentially be "chopping wood" - repetitive tedium - for someone who knew how to do the equations. Same for a calculator. You know how to do the math, but doing 50 equations at a time gets tedious. They weren't meant to take the place of knowing how to do it for yourself - even if they technically can.

Spellcheck was invented to help catch typos, but now it's an excuse not to bother learning to spell. Contact lists were meant for remembering more contacts than was realistically feasible, but people still remembered important numbers - now people don't even know their own mother's phone number without looking it up.

AI aggregation was first used to analyze and summarize massive caches of data that were too big for a human to even begin to approach, and by the time it was solved, it would be out of date - but the humans still knew how to do the equations - they just had the bots do it at larger scale. Now people use AI to read a few articles that they can't be bothered to read for themselves.

It's one thing to use AI to do stuff nobody could possibly have done in the first place. It's something else to use it to do things you're more than capable of doing yourself, you just don't want to bother spending the time doing it, and want a quick easy solution - or to use it as a prop to make it look like you're better at something than you actually are.

The bot may have allowed you to skip directly from knowing to applying, but not cos you learned or mastered anything, the bot just did it for you. You learned an answer to a question, and you learned how to operate a tool, but you still don't actually understand what you're doing. And that matters.

In the instance of college, the point is to see if YOU can perform certain work, not whether a bot can. So yeah, a masters degree class might let you use ANOVA, but that's cos you're supposed to have already learned how to do stats by hand before that, and you probably had to pass a stats class to get into the class that lets you use ANOVA.

Even when using AI to be "more efficient" at things you can already do, if you over-rely on it to do them for you, you gradually lose the ability to do them yourself. You get "rusty" and have to practice to get good at it again. It takes time to get it back - if you even go to the trouble, rather than continuing to rely on the bot.

Continuing that theme, it's one thing to ask an AI to do a thousand calculations for you, cos it would take an unreasonable amount of time to do it, and you already know how to. It's something else to ask AI to do a calculation for you just cos you don't know how to do it, and don't feel like learning how, cos you don't do math much to begin with, and you have a calculator, so why learn how to do it the hard way anyways.

The more you rely on something to do something for you, even seemingly small things, the more you erode your own ability to do them for yourself. It's a bit like booze. Small amounts probably won't do noticeable damage, but it still does what it does, and overuse has consequences. This is as true for chatgpt as it is for anything else.



cyberdora
Veteran


Joined: 12 Jan 2025
Gender: Non-binary
Posts: 2,065
Location: Australia

Today, 7:09 pm

AI will often not be as effective as a human doing the same job. It won't always be more accurate.
https://unu.edu/article/ai-not-high-pre ... world-work
And it definitely won't always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions:

1. Speed
AI can do any job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.

2. Scale
AI can access tonnes of data and be in millions of places simultaneously (hive mind).

3. Scope
AI already does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all. Hence higher education institutions are already allowing students to use them.

4. Sophistication
AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Deep learning systems built from many-layered neural networks take account of complex interactions – often many billions – among many factors.

Anyway, like it or not (point 3), both universities and workplaces permit "ethical use" of LLMs because LLMs have scope as tools, and what is deemed "ethical" or "appropriate" will change over time with the changing workplace landscape and improvements in technology.