I’m currently taking a really intense philosophy 300-level course, and we just had our first paper due. It was all about existentialism and the concept of "bad faith." I poured my soul into it, trying to connect the texts to modern social media culture.
Before submitting, I tossed it into the standard essay grader for a final spit-shine, like everyone else does. It gave me a solid score on structure and grammar, which was fine. But it flagged my conclusion as "needing more supporting evidence." And that got me thinking: can an algorithm really tell whether an argument about existential dread is fully "supported"?
It feels like the grader is optimized for STEM or business papers where everything is clean and evidence-based. In my humanities bubble, sometimes the point is the idea, not just the citation. I ended up adding a quote from a different philosopher to appease the bot, but it felt a little cheap.
Does anyone else feel like these tools force us to over-explain, sucking the soul out of more abstract arguments? Or am I just using it wrong? How do you balance pleasing the algorithm with keeping your intellectual spark alive?