• whileloop@lemmy.world · 1 year ago

    This is a joke, right? This feels like a very dumb solution. I don’t know much about UTF-8 encoding, but it sounds like Roman characters can be encoded in fewer bytes than most or all others because of a shorthand that assumes Roman text. In that case, why not take that functionality and let a UTF-8 block specify which language makes up most of the text, so that you get those savings almost every time? I don’t see why one would want it to be random.

    • alvvayson@lemmy.world · 1 year ago

      It’s a joke.

      UTF-16 already exists, which doesn’t favor Roman characters as much, but UTF-8 is more popular because it is backward compatible with legacy ASCII.

      UTF-32 also exists, which uses exactly the same number of bytes for every character.

      But the thing that equalizes languages is compression.

      Yes, a text written in Cyrillic with UTF-8 will take more space than an equivalent text in a Latin-script language, easily double. However, this extra space is much more easily compressed by an algorithm like GZIP.

      So after compression, the two compressed texts will be similarly sized, and both much smaller than they would be in UTF-16 or UTF-32.
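
      A quick way to see both effects (a sketch with made-up sample sentences; gzip standing in for any general-purpose compressor):

      ```python
      import gzip

      # Hypothetical sample text, repeated so the compressor has patterns to find.
      latin = "the quick brown fox jumps over the lazy dog. " * 100
      cyrillic = "съешь же ещё этих мягких французских булок. " * 100

      for name, text in (("Latin", latin), ("Cyrillic", cyrillic)):
          utf8 = text.encode("utf-8")
          utf16 = text.encode("utf-16-le")  # 2 bytes per character here (BMP text)
          utf32 = text.encode("utf-32-le")  # always 4 bytes per character
          print(f"{name}: utf-8={len(utf8)}  utf-16={len(utf16)}  "
                f"utf-32={len(utf32)}  gzip(utf-8)={len(gzip.compress(utf8))}")
      ```

      The uncompressed UTF-8 sizes differ by nearly a factor of two, while the gzip-compressed sizes come out in the same ballpark.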

      • jmcs@discuss.tchncs.de · 1 year ago

        Besides, most text on the average computer is either in some configuration file (which tends to use Latin script) or in some SGML-derived format with a bunch of Latin characters in it. For network transmission, most things use HTML, XML or JSON with English-language property names, even in countries that don’t speak English (see Yandex’s and Baidu’s APIs, for example).
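
        For instance (a made-up payload, but typical of such APIs), even a response whose values are entirely non-Latin is still mostly ASCII once the markup and property names are counted:

        ```python
        import json

        # Hypothetical API response: English property names, Cyrillic values.
        payload = {"status": "ok", "title": "Заголовок статьи",
                   "description": "Краткое описание для примера"}
        raw = json.dumps(payload, ensure_ascii=False).encode("utf-8")
        ascii_bytes = sum(1 for b in raw if b < 0x80)
        print(f"{ascii_bytes} of {len(raw)} bytes are plain ASCII")
        ```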

        No one is moving large amounts of .txt files around.

        • Buckshot@programming.dev · 1 year ago

          You’ve never worked in finance then. All our systems at work do nothing but move large amounts of txt files around.

          That said, many of our clients still don’t support UTF-8, so it’s all ASCII, and non-Latin alphabets are screwed. They can’t even handle characters 128–255, so even stuff like £ is unsupported.
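
          The £ case in a nutshell (a sketch, with Latin-1 standing in for whatever 8-bit code page a client might accept):

          ```python
          pound = "£"
          print(pound.encode("latin-1"))  # b'\xa3': code point 163, inside the 128-255 range
          try:
              pound.encode("ascii")       # 7-bit ASCII stops at 127
          except UnicodeEncodeError as err:
              print("unsupported:", err)
          ```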

          • LaggyKar@programming.dev · 1 year ago

            That said, many of our clients still don’t support UTF-8, so it’s all ASCII, and non-Latin alphabets are screwed.

            Ah, yes, I’ve heard about that sort of thing: a bank getting a GDPR complaint because it couldn’t correct the spelling of someone’s name, since its system used EBCDIC.

            • fibojoly@sh.itjust.works · 1 year ago

              It’s not a joke. I worked for a big European bank network, and the software there didn’t know how to translate from EBCDIC to UTF-8, because none of the devs writing it knew enough about the other side (mainframe vs. PC) to realise this was an issue.

              Their solution was “if the file has a ? in it when we receive it, it’s probably a £”. Which of course completely breaks down the day you have any other untranslated character.
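
              The failure mode is easy to reproduce (a sketch using Python’s replacement behaviour, not the bank’s actual converter): every unmappable character collapses into the same ?, so the information needed to guess the original is already gone.

              ```python
              text = "£99 départ €5"
              lossy = text.encode("ascii", errors="replace").decode("ascii")
              print(lossy)  # "?99 d?part ?5": the £, é and € are now indistinguishable
              ```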

              I spent fucking weeks explaining this issue and why this was abominable, but apparently this wasn’t enough of an issue for people to fix it. Go figure…

    • S410@kbin.social · 1 year ago

      It’ll be added when they find some free time!

      You see, adding pictures of women with white canes facing right, limes and pregnant men is a very important and time-consuming job! Standardizing the encoding for some human language people actually use is just not as important!

          • LalSalaamComrade@lemmy.ml · 1 year ago

            The KTSA under Pavanaja was trying to reform and modify the script. It was a destructive reformation: the script now borrows some features from the Kannada and Malayalam scripts, and some of its characteristics were newly invented, never seen in the original script. The other camp, under Murthy, was trying to preserve the original, archaic script. At last, both groups came to an agreement this year: the reformed script will be allowed, since it is already finished and easier to grasp, and will be called the invented Tulu lipi. The ancient lipi will be called the Tulu-Tigalari lipi, and since there is still some unconfirmed research on a few of its characters, all they have to do now is focus on those characters and share the rest with the invented lipi.

            • v_krishna@lemmy.ml · 1 year ago

              Very interesting article and background. My father’s side of the family is all from Mysuru, with long roots in Udupi and Manipal as well. I’ll ask if anybody is a Tulu speaker; I don’t think so, as I’ve never heard of it.

              • LalSalaamComrade@lemmy.ml · 1 year ago

                They are known, but there are multiple different forms. Some of those forms may never have been seen, and some of them cannot be expressed in Unicode, as it was made with Latin letters in mind.

                So when you’re trying to digitize an abugida, you have to be careful about ligatures, because the real world may have multiple different forms in different contexts, and you get to choose only one. But when we are talking about archiving, it has to be reproduced exactly the way it appears in the palm-leaf inscription.
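
                A small illustration of the ligature point (using Devanagari, since the Tulu script itself isn’t encoded yet): Unicode stores a conjunct as a character sequence, and whether it renders as a fused ligature is left to the font, so the encoding alone can’t record which written form a manuscript actually used.

                ```python
                import unicodedata

                # Devanagari "kṣa" is stored as ka + virama + ssa; the fused
                # ligature क्ष versus a split rendering is a font decision,
                # not something the encoded text itself can distinguish.
                ksa = "\u0915\u094d\u0937"
                print(ksa, [unicodedata.name(c) for c in ksa])
                ```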

    • palordrolap@kbin.social · 1 year ago

      Sure. OK. How about we put the Greek alphabet at the lower code points and the Latin alphabet higher up? Now you might argue that Latin takes up more space than necessary.

      Potential counterpoint: “This is stupid. Latin goes in the lower code points, it always has, it always will. Who’s putting Greek down there??”

      Well, if Greece had invented computing as well as, let’s say, democracy, that’s very likely how things would be.

      In that timeline, someone is using exactly the same line on you, “[The representation of Latin text in memory i]s as long as it needs to be unique,” and you’re annoyed because your short letter to Grandma is using far too much space on your hard drive.

      • TheHarpyEagle@lemmy.world · 1 year ago

        Genuine question: how many applications are bottlenecked by the size of text files? I understand your analogy, but even a doubling in size of all your UTF-8 encoded files would likely be dwarfed by all the other binary data on your machine, right?

      • lowleveldata@programming.dev · 1 year ago

        Oh true. I’d be so annoyed, because I somehow wrote a whole letter to Grandma in English, which she couldn’t read.