> and they still don't think to simply just tell GPT what lens they want it to view content in, or give it accurate and necessary directives. They think it'll handle that for them. Why?
Projecting much? That doesn't describe me _at all_. I use ChatGPT and other LLMs frequently. I just legitimately don't think it would be good at the task of grading a paper with any sort of consistency. I don't even see why you would _need_ your paper graded for you to know it has flaws.