• 0 Posts
  • 5 Comments
Joined 8 months ago
Cake day: March 17th, 2024



  • A .world or a sh.itjust.works is too much for a handful of amateur admins to handle. Hand the instance off to a venture capital firm and you could see rapid enshittification.

    Lemmy is federated. It is expected that many .worlders would just jump ship to another instance, and I don’t see how the venture capital firm could stop them. As long as one organization doesn’t control 60%+ of all users’ instances, we should be unshittifiable. Enshittification is still possible, but it is much harder, because the other non-shit instances still exist and, being federated, can access the same content.

    They could try to pull up the drawbridge and de-federate from every instance that isn’t under the firm’s control, so that the content of the venture capital instances is exclusive, but as long as they don’t control 60%+ of all users’ instances we are good.

    It is not too hard to imagine that, if .world were to be sold like that, half or more would jump ship. At least that’s what I hope.



  • I don’t think you understand exactly how these machines work. The machine does not “learn”; it does not extract meaning from the tokens it receives. Here is one way to look at it:

    Suppose you have a sequence of symbols: ¹§ŋ¹§ŋ¹§ŋ¹§ŋ. Now suppose you were given a fragment of that sequence and asked to guess the most likely symbol to follow it: ¹§. Think you could do it? I’m sure you would have no trouble solving this example. But could you make a machine that could reliably accomplish this task, regardless of the sequence of symbols and regardless of the fragment given? Let’s imagine you did manage to create such a marvellous machine.

    If given a large sequence of symbols spanning multiple books in length, would you say this pattern-recognition machine is able to create anything original? No, because it is simply trying to reproduce its original sequence as closely as possible.

    Another question: would this machine ever derive meaning from these symbols? No. How could it?

    But what if I told you that these symbols weren’t just symbols? Unbeknownst to the machine, each one of these symbols actually represents a word. Behold: ChatGPT.

    This is basically the general idea behind generative AI as far as I’m aware. Please correct me if I’m wrong. This is obviously oversimplified.
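    A toy version of the “guess the next symbol” machine described above can be sketched in a few lines of Python. This is a simple bigram frequency model (it only looks at the single previous symbol), vastly cruder than a real LLM, and the function names are my own, but it shows the core idea: count which symbol tends to follow which, then predict the most frequent follower.

    ```python
    from collections import Counter, defaultdict

    def train(sequence):
        """For each symbol, count which symbol follows it and how often."""
        counts = defaultdict(Counter)
        for current, nxt in zip(sequence, sequence[1:]):
            counts[current][nxt] += 1
        return counts

    def predict(counts, fragment):
        """Guess the most likely symbol to follow the fragment's last symbol."""
        last = fragment[-1]
        return counts[last].most_common(1)[0][0]

    sequence = "¹§ŋ¹§ŋ¹§ŋ¹§ŋ"
    model = train(sequence)
    print(predict(model, "¹§"))  # in this repeating sequence, ŋ always follows §
    ```

    The machine never knows what ¹, §, or ŋ “mean”; it only tracks how often one follows another. Swap the symbols for word tokens and scale the statistics up enormously, and you have the gist of the comment above.
    
    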