36 Comments
Nearly all jobs use AI nowadays too, so it’s good practice.
Imo it’s going to go one of two ways.
Either AI gets good, but not that good. It becomes like car ownership: people who drive go much further and faster than people who walk, but human input is still critical.
For instance, AI systems still confidently hallucinate, and if that can’t be fixed for a couple of decades then they’ll always just be assistants where everything they do is checked.
In which case training young people to use them is good and they should just up the difficulty of the assignments so students are again at their limit.
Or AI is going to take all cognitive work, or huge chunks of it, in which case it doesn’t matter what you do at university.
Personally I favour a mix of written work and oral examinations on the content, so you can check whether the student can produce texts and also whether they understand what’s in them; they need both to pass.
This isn’t concerning.
That people are still making assessments that can be done by LLMs is a problem though.
AI is here to stay and students are going to learn how to use it eventually, so they might as well get a grounding in how reliable it is, how to use it effectively, and how to spot the bullshit it spits out. It should also come with universities switching to more oral assessments and handwritten exams under closed-book conditions. We need graduates who can think, analyse and discuss what they know more than we need graduates who can outsource everything to AI.
I think universities and educational institutions as a whole need to have a serious think about what their actual purpose is and what they are preparing students for. When I was in secondary education the focus always seemed to be on memorisation rather than understanding. University improved on that somewhat, but memorisation was still definitely a factor, since most exams were closed book or didn’t have comprehensive formula sheets etc. At the time I never felt this was preparing me for work, since in the workplace memorisation is effectively not a factor and you are expected to draw on any resource available to you as needed.

AI is now exacerbating this: employers are pushing hard for it to be used whilst the education system is generally against it, so the gap between education and actual work skills seems to be widening.

I came out of university with an Engineering degree and now work as an engineer, yet have never used anything taught to me in uni other than some more advanced Excel skills. That isn’t to say that others from my class haven’t used more of what we learned, but it makes university feel like a checkbox exercise rather than something that actually teaches you how to succeed in your field of study. It has to be tough being a student now, struggling with this disconnect between industry and education.
A lot of academia is wasted time. If I can answer the criteria the assignment is testing with bullet points and then have AI fluff it out to the arbitrary word count some teacher has decided on, why not?
Hardly anyone in the working world will ask you to “write 1000 words on such and such”.
AI is currently in the disruptor phase, similar to when Uber first started and massively undercut existing taxi firms. It’s free now, but at some point in the future you will have to pay to access it, and by then everyone will be so reliant that they’ll simply have to hand over the cash.
I do some teaching and marking, mainly post-grad but also undergrad from time to time. So no claims to be an expert, but it’s something I’ve been thinking about.
This has very quickly become a huge issue, and the education sector needs to decide what to do about it, which probably involves a fundamental shift.
There is plagiarism detection software (and has been for a while) but it’s not feasible to run every student essay through it, and it’s also not infallible.
In my case, the written essays sat alongside practical work, so I could easily see if someone was bullshitting due to the disparity between the essay and the projects. But if you’re just marking written work… it’s almost impossible now to be sure.
It would seem to me that the only real guarantor of original work is to shift more towards a viva system, or oral examination. If someone’s been copying stuff in from AI or Google, it will quickly become apparent if they can’t support or explain their thought process.
Of course that would be hugely time-consuming and expensive.
My sixth form teachers for Travel + Tourism advised us to use AI, as they literally couldn’t be bothered to teach us. Not sure if that’s the actual reason, but whenever I asked a question, they’d always tell me to use Gemini / ChatGPT, which is bizarre.
Recently did an undergrad degree where, instead of simply telling us not to use it and hoping we listened, or setting closed-book, unrealistic memory-recall exams, the assessments were based on critical thinking, scientific evaluation of the literature, and actually forming your own thoughts around a complex matter, with justification. We were actually told it was fine for research etc., just warned that it’s not very good at scientific data (it misses nuanced interpretations and often just gets things wrong), and that it’s obvious when an essay is straight-up generated and that this kind of surface-level description wouldn’t do very well.
Or the assessments were presentation-based, with questions being a big component.
This approach is something I’m genuinely impressed by, and imo the right way to counter it.
So my mind here immediately goes to, how much of a final degree assessment is written assignments or coursework? Obviously it’ll vary by subject (and university too, I would imagine).
When I was at uni, the degree was awarded on the basis of three things that were assessed over the second and third year: coursework, EOY exams and ongoing assessment by the lecturers based on verbal contributions in seminars. Coursework only counted for 20% of the final result. So even if I’d had AI to use on the written assignments (which I didn’t, I was at uni in the 90s), it would only have made a partial contribution to the degree as a whole.
I do think the universities will need to react to this and it’ll be interesting to see what they do. I know a lot of them are currently using Turnitin or similar, which puts the onus on students to run their work through a plagiarism checker before it can be uploaded for submission, but this isn’t a perfect system.
I haven’t worked out where I sit on this yet. At one level, good AI literacy is an increasingly core skill, and those who can’t use it properly are going to be the future equivalent of those people from my parents’ generation who won’t use internet banking or a mobile phone.
But on another level some people are clearly using AI as a kind of cheat mode, and using it this way does defeat the object of higher education in many ways. So maybe the universities will have to downweight the way coursework is assessed, and upweight final exams and verbal or practical assessments in which AI is impractical, if not impossible, to use.
Surely this is like how nearly all undergrads use Google search. It’s a new tool that makes you faster.
In my computer science degree, ethics was a mandatory module all three years. If you’re going to be designing software that can kill people, software that can track people, or software that handles the most sensitive data people have – it’s been (rightly) deemed that ethics needs to be instilled into you as a mandatory module for three whole years.
I think a similar thing, perhaps even rolled into ethics, is going to come into practice with AI. The way using AI reshapes your brain, changes your thinking and critical thinking and the way it doesn’t actually know if it’s right or wrong all need to be taught, understood, and hammered home year after year.
we shouldn’t have a repeat of the calculator panic; even if, hypothetically, AI tools never improve from today onwards, they’ll still be used loads in jobs and that isn’t going away.
whilst there is value in being able to perform multiplication and division without a calculator, it isn’t worth spending a degree focusing on it; the courses should adapt. as long as the output isn’t plagiarised or bought, two things not sustainable in the business world, the course should measure aptitude assuming the person is using AI tools, just as a take-home maths assessment assumes you’ll use a proof checker and other widely used tools.
there’s still a range of skill, people who don’t understand the source material can only get so far with prompt engineering and will let mistakes through, revealing their lack of aptitude, which the course should catch and mark them down for. the best students should still come out on top. the useless ones not willing to put in the work should fail. that can be achieved without trying and failing to police AI use.
and ofc, on occasion, an in-person assessment to make sure they can perform at a satisfactory level without those tools.
Goodbye coursework, welcome back monitored tests and grades coming down
What’s the point of going to uni at this point?
If your response is “well work is basically just using AI at this point” then that makes the point even more stupid
Relatively simple solution: in-person exams. They always were a better way of testing imo anyway.
“My student brings me their essay, which has been written by AI, & I plug it into my grading AI, & we are free!”
– Žižek
Some people don’t know how bad it is, as a graduate who worked hard throughout all of university, to be lumped in with people like this. I commuted up to two hours to university 3-5 days a week and would spend my time not in class in the library studying. I went to a good university and graduated with good grades, but the assumption will be that I just used AI to cheat my way through.
I mean what did you expect? Them to NOT use AI and go into the workplace with a disadvantage?
Pub quizzes have adapted more quickly to changes in technology than universities have
I’m so glad I finished literally the year before this became a legitimate concern.
I don’t think the article mentioned it, but are the students allowed to use AI in their assignments?
I’m studying with the OU, and for my current module (it varies module to module) we can use AI for assignments but have to fill out a form declaring that we used it, what we used it for, why we used it, the AI model, the prompt we used, and an overview of the results.
We also have to keep a full log of the interaction in case our tutor asks to see it.
I haven’t used it voluntarily yet, but had to use it for a question on my last assignment. I personally didn’t like it, and I don’t think it particularly sped up my work, as I had to cross-check everything the AI produced and correct all the references etc.
However, AI use is the way businesses are going (for the time being at least), so I think it might be time for unis to cover how to use AI academically, to give students the tools they need to be effective and to ingrain good habits, like checking that references are correct and that it hasn’t hallucinated.
AI is really useful. I use it to have a conversation with myself and check if my arguments are organized in a way that makes sense. I also use it as a dictionary and to check if my sentences are grammatically correct, for example. However, it’s obvious that it can’t be trusted with facts, and I always have to correct it, to which it replies: “you’re right, I was wrong”, but then keeps giving a wrong answer anyway.
My son is studying economics. When he went to uni we really emphasised the need to “play the game” and engage, go to every class and seminar that he can, etc. He does, but he’s in a minority. Lots skip lessons all the time and make up for it by using AI.
He says he gets lots of extra help and advice because they know him and he can ask questions. He might use AI to summarize points to help him build an argument. But he’s stubborn about doing his work himself because he recognizes what he actually gets from doing it. And he’s on track for a 1st.
We often talk about what “the point is” of certain things (teacher contact, struggling with certain issues, spending time doing certain things) and he gets it. He says plenty of people are just there for the certificate though. His view is: how is that going to help you be good at a job?
Whenever people say they use AI I immediately disregard most of what they have to say and think they’re a fucking idiot. There has never been a single task in my working or home life that I’ve needed to outsource that couldn’t also be solved by a solid 5 minutes of thinking and study/journal reading.
I do worry that young people reliant on AI won’t develop the skills needed to think up logical solutions quickly.
There’s nothing wrong with using AI to help you research a topic, _as long as you use actual academic investigation techniques to verify the sources_. Kids forget that step.
The issue lies in the trade-off between doing a degree for fun/personal enrichment vs for getting a job. The entire existence of the latter means that getting a good mark matters, so if using AI gets you a better mark, the choice is easy.
And in the real world, all employees are using AI.
The problem is the teaching and examination system is broken and not evolving
It’s fair enough. AI does the assignment, then AI gets the job!
I’m not gonna sift through Stack Overflow manually when there’s a chatbot that can, most of the time, just phrase the answer better so I can keep going.
Paying 10k a year to learn absolutely nothing sounds like a great use of time and money.
Must be depressing for lecturers giving feedback on work their students didn’t even write themselves. The students are just undermining their own learning.
I’ve used AI in submissions. Not to write entire pages, but mostly to refine what I said (Grammarly) or, in some cases, to help me understand certain aspects of what I needed to write about.
AI can be helpful in a number of applications but can be abused very quickly.
If I’m honest, vivas need to be a thing for undergrads for precisely this reason.
I was once worried about job security when the next gen came along with new ideas, but the brain drain might mean the current working generation is safe, covering for the sudden drop in skills that’s coming.
If you can’t do the basics, you cannot develop skills for the more complex stuff AI can never do.
I’m glad I’m not at university in these times and that my job is very unlikely to be taken over by AI.