The pill strip on top with a list of video topics
There’s one reason not to block this: all the way on the right of that list is a “new to you” feed button, which is pretty neat to try sometimes.
I love this extension, although the install process can best be described as labyrinthine. It’s crazy how many videos have the altered metadata compared to how many people I’d expect to use the extension; I guess we’re all active on YouTube, likely to contribute, and may see similar videos.
Highly recommend!
This is every school surveillance software
I would expect “faster” to be a way
I feel like the real answer is and has been for a long time some sort of distributed moderation system. Any individual user can take moderation actions. These actions produce visible effects for themself, and to anyone who subscribes to their actions. Create bot users who auto-detect certain types of behavior (horrible stuff like cp or gore) and take actions against it. Auto-subscribe users to the moderation actions of the global bots and community leaders (mods/admins) and allow them to unsubscribe.
We’d probably still need some moderation actions to be absolute and global, though, like banning illegal content.
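The subscription-based moderation idea above can be sketched out in code. This is a minimal hypothetical model (all names and classes are mine, not from any real system): each user keeps their own moderation actions, and inherits the actions of any bots or community leaders they subscribe to.

```python
from dataclasses import dataclass, field

@dataclass
class Moderator:
    """Any entity that can take moderation actions (user, bot, mod/admin)."""
    name: str
    hidden_posts: set = field(default_factory=set)  # post IDs this entity has hidden

@dataclass
class User(Moderator):
    # Other moderators whose actions this user has subscribed to.
    subscriptions: list = field(default_factory=list)

    def visible(self, posts):
        # A post is hidden for this user if they hid it themselves,
        # or if any moderator they subscribe to hid it.
        hidden = set(self.hidden_posts)
        for mod in self.subscriptions:
            hidden |= mod.hidden_posts
        return [p for p in posts if p not in hidden]

# A global bot auto-hides post 3. Alice is auto-subscribed to it and also
# hides post 1 herself. Bob has unsubscribed, so he still sees everything.
bot = Moderator("global-spam-bot", hidden_posts={3})
alice = User("alice", hidden_posts={1}, subscriptions=[bot])
bob = User("bob")

posts = [1, 2, 3, 4]
print(alice.visible(posts))  # [2, 4]
print(bob.visible(posts))    # [1, 2, 3, 4]
```

The key property is that moderation is opt-out per subscriber: unsubscribing from the bot restores the hidden posts for that user, while globally-banned illegal content would sit outside this model entirely.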
Some sort of “report as bot” --> required captcha pipeline would be useful
Ad hominem is when you attack the entity making a claim using something that’s not relevant to the claim itself. Pointing out that someone (general someone, not you) making a claim doesn’t have the right credentials to likely know enough about the subject, or doesn’t live in the area they’re talking about, or is an LLM, isn’t ad hominem, because those observations are relevant to the strength of their argument.
I think the fallacy you’re looking for could best be described as an appeal to authority fallacy? But honestly I’m not entirely sure either. Anyways I think we covered everything… thanks for the debate :)
Ah, now I feel bad for getting a bit snippy there. You were polite and earnest as well. Thanks for the convo 🫡
Ok, I get what you’re saying, but I really don’t know how to say this differently for the third time: that’s not what ad hominem means
Ok, but if you aren’t assuming it’s valid, there doesn’t need to be evidence of invalidity. If you’re demanding evidence of invalidity, you’re claiming it’s valid in the first place, which you said you aren’t doing. In short: there is no need to disprove something which was not proved in the first place. It was claimed without any evidence besides the LLM’s output, so it can be dismissed without any evidence. (For the record, I do think Google engages in monopolistic practices; I just disagree that the LLM claiming it’s true constitutes a valid argument.)
To me, all the mental gymnastics about AI outputs being just meaningless nonsense or mere copying of others is a cop-out answer.
How much do you know about how LLMs work? Their outputs aren’t nonsense or copying others directly; what they do is emulate the pattern of how we speak. This also results in them emulating the arguments that we make, and the opinions that we hold, etc., because those are a part of what we say. But they aren’t reasoning. They don’t know they’re making an argument, and they frequently “make mistakes” in doing so. They will easily say something like… I don’t know, A=B, B=C, and D=E, so A=E, without realizing they’ve missed the critical step of C=D. It’s not a cop-out to say they’re unreliable; it’s reality.
You’re saying ad hominem isn’t valid as a counterargument, which means you think there’s an argument in the first place. But it’s not a counterargument at all, because the LLM’s claim is not an argument.
ETA: And it wouldn’t be ad hominem anyways, since a claim about the reliability of the entity making an argument isn’t unrelated to what’s being discussed. Ad hominem only applies when the attack isn’t relevant to the argument.
You’re correct, but why are you trusting the output by default? Why ask us to debunk something that is well-known to be easy to lead to the answer you want, and that doesn’t factually understand what it’s saying?
What? No, the fact that it’s an LLM is pivotal to the reliability of the information. In fact, this isn’t even information per se, just the most likely responses to this question synthesized into one response. I don’t think you’ve fully internalized how LLMs work.
As usual, most people who have control of how technology is used on a broad scale are in positions of power suitable for exploitation. That is, the people I’m talking about are business owners and high-level executives (and the government) using technology to exploit workers. To be fair, that’s not always the dynamic: “normal” people can exploit each other too, and businesses and the government can as well. But it is the most pressing issue imo, because of the power imbalance. See also rent control algorithms, automated insurance claim denials, etc.
If people stopped using tech to exploit people they would stop feeling exploited by it
I mean, it could be spent on outreach to inform people that Harris is more than just “some woman”, for one.
Thanks for the list, I’ll be trying these!