Blog 2: Crafting Arguments
Published on:
AI Entertainment or Harm? How fake media is breaking our trust.
News Article:
'How to spot an AI video? LOL, you can't.'
Article’s Argument and Premise
This Washington Post article tackles the question of whether we can tell if a video is AI-generated. It centers on a video posted to TikTok of bunnies jumping on a trampoline, a video that I believed to be real. I, like many TikTok users, was fooled… Although videos like this may be essentially harmless, how can the spread of misinformation on more serious matters impact our trust in online content?
- P1: AI-generated videos are allowed to be posted on social media.
- P2: Many viewers are being tricked into believing this made-up content is real.
- P3: The public is questioning its trust in online videos.
- C: Therefore, AI content on social media platforms like TikTok needs to be regulated, as it promotes misinformation and harms our trust in online content.
Rebuttal and Fallacy
- P2 challenge: Many people are not being deceived by AI videos. For example, my boyfriend made fun of me for believing bunnies could hop on a trampoline like that.
- P3 challenge: There isn't much evidence showing that AI videos are harming our trust in online content. The harm is subjective.
Fallacy: This is a slippery slope: the argument assumes that one viral AI video will lead people to believe EVERYTHING online is fake.
- Rebuttal:
- P4: Many people can tell when something is AI and find it funny.
- P5: AI can be harmless and entertaining
- R: Allowing AI-generated videos online doesn't harm the public's trust in online content.
Alternative Argument
- C2: AI bunny content may be harmless and entertaining, but letting AI content become the new norm also normalizes the spread of misinformation.
Recommendation
I believe AI-generated videos should clearly state that they are AI-generated. Stuff like AI-generated bunnies should be allowed on platforms like TikTok because it isn't causing any harm, but harmful videos, like deepfakes of politicians, should be flagged.
Reflection
As a college student living in today's digitalized world, AI videos and content have become something I'm sort of afraid of. I will admit that I used to think it was funny when my mom would see a clearly fake video on Facebook and believe it to be real. However, I am now her. I frequently catch myself falling for AI-generated videos and pictures, and that terrifies me. AI content is just so new, and we are essentially the test dummies. I'm sure it will be heavily regulated in the future, but right now, we are making the "rules" as the issues come up. Overall, it was a very interesting article, and it made me slow down a little and think about how I interact with these fake videos. I know we shouldn't fight AI, but that doesn't mean I can't be afraid of it.
