There’s no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide

AI detectors used by educators to detect use of ChatGPT don’t work, says OpenAI.

  • PurpleTentacle@sh.itjust.works · 10 months ago

    My wife teaches at a university. The title is partly bullshit:

    For most teachers it couldn’t be more obvious who used ChatGPT in an assignment and who didn’t.

    The problem, in most instances, isn’t the “figuring out” part, but the “reasonably proving” part.

    And that’s the most frustrating part: you know an assignment was AI-written, but there are no tools to prove it, and the university gives its staff virtually no guidance or assistance on the matter, so you’re almost powerless.

      • lobut@lemmy.ca · 10 months ago

        I agree with you for sure. However, if I’m playing devil’s advocate … I think some people will buckle under the pressure and perform poorly just because it’s oral rather than written.

        I generally think that even if that’s the case, it’s an important skill to teach too, but I’m just thinking of the contradictions.

        • Iteria@sh.itjust.works · 10 months ago

          Oral exams would suck for students in the transition. It’s a completely different style and skill set for answering questions, and no kid would have the training or the mental framework for it. It’s great if you’re the kind of person who can write a mostly perfect draft essay from start to finish, no skipping around or backtracking, but if that’s not you, it’s going to be a rough learning curve. And that’s before we ask questions like: how does a deaf person take this exam? A mute person? Someone with verbal paraphasia?

        • grabyourmotherskeys@lemmy.world · 10 months ago

          You are not wrong. I think the best use of this would be a verification test that had a significant impact on your grade but didn’t necessarily fail you if you did well in the other evaluations.

          Think of it as a conversation, like a job interview, that takes into account the different ways people react in that environment. I do this when I’m interviewing candidates for technical jobs. I value good communicators, but if those were the only people I hired, I wouldn’t have as good a team. And if I do hire someone who isn’t as good at this, I coach them, and they get more comfortable. I realize some people have anxiety or other things that make this very difficult; I think that could be taken into account (e.g. more written work, but in an observed setting).

      • Brainsploosh@lemmy.world · 10 months ago

        The biggest reason for written exams is bulk processing.

        There are many better ways to demonstrate competency (ask any engineering or medical school), but few are as cheap.

      • inspxtr@lemmy.world · 10 months ago

        To add on to the detection issues: international students, students on the spectrum, students with learning disabilities, … can all end up being flagged as “AI generated” by AI detectors. Teachers/professors who have gut feelings should (1) reconsider what biases they have about expected writing styles, and (2), like u/mind says, check in with the students.

      • Gargantu8@lemmy.world · 10 months ago

        I’ve had great success pasting in my own writing to help it write in my “voice”.

    • uriel238@lemmy.blahaj.zone · 10 months ago (edited)

      My coven-mate was called in by her college dean, who accused her of faking or plagiarizing her mid-term thesis. (I totally forget what the subject was. This was the late 1980s. She wanted to work in national intelligence.)

      But the thing is, she could explain every part of her rabbit-hole deep dive (which meant trips to several libraries to locate the books themselves, rather than tracking leads through the internet). It was all fresh in her head, and to the shock and awe of her dean and instructor (delight? horror?), it was clear she was just a determined genius doing post-grad-quality work because she pushed herself that hard. And yes, she was out of their league and could probably have written the thesis again if that had been necessary.

      In our fucked-up society, the US has little respect for teachers or even education, so I don’t expect anything real to happen, but this would be grounds to reduce class size by increasing faculty size, so that each teacher is familiar with their fifteen students: their capabilities, their ambitions, and their challenges at home. That way, when a kid turns in an AI essay but then can’t explain what the essay says, the teacher can use it as a teachable moment: point out that AI is a springboard, a place to start as a foundation for a report, but that it’s still important for the student to make it their own and make sure it comes to conclusions they agree with.

    • jmp242@sopuli.xyz · 10 months ago

      I would like to know how you know who’s using ChatGPT though. A gut feeling doesn’t work for many good reasons.

      • Thorny_Thicket@sopuli.xyz · 10 months ago

        ChatGPT writes in a very distinct style, and it’s quite easy to tell for anyone who has played around with it. The issue here isn’t necessarily being able to tell who’s cheating; proving it is the hard part.

    • Thorny_Thicket@sopuli.xyz · 10 months ago

      Yeah, I use ChatGPT to assist with the grammar in my posts here at times. However, I need to explicitly instruct it to only correct the errors and not make any other changes. Otherwise, it completely rewrites the entire message, and the language ends up sounding unmistakably like ChatGPT. As you mentioned, it’s immediately apparent because it has a distinct style, and no typical human writes in that manner. Like you said, it’s easy to discern but challenging to confirm. Additionally, with the right prompt, you can probably get it to generate text that sounds more conventional.
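
      For example, the kind of instruction I mean looks roughly like this if you go through the API rather than the chat UI. This is just a minimal sketch with the OpenAI Python client; the model name and the exact prompt wording are illustrative, not exactly what I use:

```python
# Minimal sketch: ask the model to fix grammar only, without rewriting.
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def fix_grammar_only(text: str) -> str:
    """Return `text` with grammar and spelling corrected but otherwise unchanged."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Correct only grammar and spelling errors in the user's text. "
                    "Do not rephrase, restructure, or change the tone, wording, or style."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(fix_grammar_only("Their is to many error in this sentences."))
```

      The key part is the explicit “don’t change anything else” constraint in the system message; without it, you get the full ChatGPT-style rewrite.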

    • MinekPo1 [She/Her]@lemmygrad.ml · 10 months ago

      Something that can come up is weird notation in math.

      As an example, Photomath, which is an automatic math problem solver, uses a different interval notation (i.e. x ≥ 2 is solved for all x ∈ [ 2, ∞ ⟩) than the one used in my locale (i.e. x ≥ 2 is solved for all x ∈ [ 2, ∞ ) or for all x ∈ ⟨ 2, ∞ )), which does trip some people up.
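
      To spell out what I mean (a small LaTeX sketch of my own; all three forms below are meant to describe the same solution set of x ≥ 2):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
All of the following are meant to denote the same solution set
$\{\, x \in \mathbb{R} : x \ge 2 \,\}$:
\begin{align*}
  x \ge 2 &\iff x \in [\,2, \infty)        && \text{(parenthesis at the unbounded end)}\\
  x \ge 2 &\iff x \in [\,2, \infty\rangle  && \text{(Photomath-style angle bracket)}\\
  x \ge 2 &\iff x \in \langle 2, \infty)   && \text{(my locale's angle bracket at the closed end)}
\end{align*}
\end{document}
```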

      This is more relevant at the high-school level than at the academic level, I’m guessing, though.

      Extra note: ChatGPT gets the right notation (in a sample of n = 1).