• GnuLinuxDude@lemmy.ml · 37 minutes ago

    As for what ByteDance plans to do with a new LLM, a person familiar with the company’s ambitions said one goal has to do with the search function for TikTok.

    Last week, TikTok released an update to its current search function focused on [keywords for ads], basically allowing advertisers to search in real time for words that are trending on TikTok. It allows marketers to build an ad with relevant keywords that would ostensibly help the ad show up on the screens of more users.

    “Given the audience and the amount of use, TikTok with a search environment that is a completely biddable space with keywords and topics, that would be very interesting to a lot of people spending a ton of money with Google right now,” the person said.

    A dark vision just flashed through my mind, and I am certain this is what will happen: AI-generated ads produced in real time based on the latest “trending” thing, presented to users basically as soon as the topic has the slightest amount of “trend”.

    Just emitting untold amounts of CO2 to show you generated ads in near real time.

    • Echo Dot@feddit.uk · 3 hours ago

      People like to act as if archiving was never a thing until about a year ago, at which point it was suddenly invented and became a threat in some nebulous way.

      • hamsterkill@lemmy.sdf.org · 44 minutes ago

        It’s not that it’s a threat; it’s that there’s a difference between archiving for preservation and crawling other people’s content to make money off it (in a way that does not benefit the content creator).

  • Roflmasterbigpimp@lemmy.world · 12 hours ago

    I can’t contribute anything here, I just came to say I really, really like the phrase “gobbling something up” :D

  • zod000@lemmy.ml · 22 hours ago

    We’ve had this thing hammering our servers. The scraper uses randomized user agents (browser/OS combinations) and comes from a number of distinct IP ranges in different datacenters around the world, but all the IPs trace back to ByteDance.

    • UnderpantsWeevil@lemmy.world · 21 hours ago

      Wouldn’t be surprised if they’re just cashing out while TikTok is still public in the US. One last desperate grab at value-add for the parent company before the shutdown.

      Also a great way to burn the infrastructure for subsequent use. After this, you can guarantee every data-security company is going to add the TikTok servers to their firewalls and blacklists, so the American company that tries to harvest the property is going to be tripping over these legacy bulwarks for years after.

      • Maggoty@lemmy.world · 17 hours ago

        This has nothing to do with TikTok other than ByteDance being TikTok’s parent company.

  • dinckel@lemmy.world · 1 day ago

    It’s illegal when a regular person steals something, but it’s innovation and courage when a huge corporation steals something. Interesting how that works.

  • affiliate@lemmy.world · 20 hours ago

    from the article:

    Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.

    i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt

      • affiliate@lemmy.world · 12 hours ago

        i would probably word it as something like:

        Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.

        in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:

        • robots.txt is fundamentally a list of rules, not a single line of code
        • robots.txt can allow bots to access certain parts of a website, it doesn’t have to ban bots entirely
        • it’s not legally binding, but it is still customary for bots to follow it

        i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
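
        for illustration, a minimal robots.txt showing both points might look like this (the paths and the bot name are made up):

            # applies to any bot not named below
            User-agent: *
            Disallow: /admin/     # keep bots out of /admin/
            Allow: /              # everything else is allowed

            # a specific bot can be given its own, stricter rules
            User-agent: ExampleBot
            Disallow: /           # this one may not crawl anything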

      • ma1w4re@lemm.ee · 20 hours ago

        List of files/pages that a website owner doesn’t want bots to crawl. Or something like that.

        • NiHaDuncan@lemmy.world · 19 hours ago

          Websites actually just list broad areas, as listing every file/page would be far too verbose for many websites and impossible for any website that has dynamic/user-generated content.

          You can view examples by going to almost any website’s base URL and adding /robots.txt to the end of it.

          For example www.google.com/robots.txt
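
          If you’d rather check from a terminal, something like this should also work (assuming curl is installed):

              curl -s https://www.google.com/robots.txt | head -n 20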

  • BlackEco@lemmy.blackeco.com · 1 day ago

    Also, it doesn’t respect robots.txt (the file that tells bots whether or not a given page can be accessed), unlike most AI scraping bots.

    • kboy101222@sh.itjust.works · 24 hours ago

      My personal website that primarily functions as a front end to my home server has been getting BEAT by these stupid web scrapers. Every couple of days the server is unusable because some web scraper demanded every single possible page and crashed the damn thing

      • Echo Dot@feddit.uk · 3 hours ago

        Can’t you just disallow all external requests other than your own IP? If it’s a personal website that’s just for you then it really doesn’t need to be accessible by anyone else and if anyone comes along that needs access you can just manually add their IP.

        It’s a minor pain to have to implement, but it’s an easy solution.
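
        If the server happens to be behind nginx, for instance, the idea might sketch out like this (the addresses are placeholders):

            location / {
                allow 203.0.113.5;     # your own IP
                allow 203.0.113.0/24;  # or a trusted range
                deny  all;             # everyone else gets 403 Forbidden
            }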

        • kboy101222@sh.itjust.works · 2 hours ago

          I have family and friends that also access the site’s contents, so that’s sadly not feasible without getting the IPs from dozens of different devices.

      • assaultpotato@sh.itjust.works · 21 hours ago

        I do the same thing, and I’ve noticed my modem has been absolutely bricked probably 3-4 times this month. I wonder if this is why.

    • Guy Dudeman@lemmy.world · 1 day ago

      Google’s mission statement was originally something about controlling the world’s data. If Google has competition, that might be a good thing?

            • alphabethunter@lemmy.world · 23 hours ago

                It’s the same old Yankee speech: “it’s Chinese, so it must be really bad”. They’re definitely no worse than Google or Facebook.

                • Imgonnatrythis@sh.itjust.works · 23 hours ago

                  They come from an environment where the government actively encourages, and sometimes funds, stealing copyrighted information, couched in a strong history of disregard for human rights. I’m not defending Google, and yes, the US government has given them leeway, but if there is the potential for something worse than Google, ByteDance is it.

  • Breve@pawb.social · 1 day ago

    They’re too late; there’s going to be way too much AI-generated garbage in their data, and many social media platforms like Reddit and Twitter have already taken measures to curb scrapers.

    • chickenf622@sh.itjust.works · 1 day ago

      Like those platforms aren’t already full of AI garbage as well. Training new models will require a cut-off date before the genie was let out of the bottle.

    • Drunemeton@lemmy.world · 1 day ago

      I think that’s the “25-times faster” bit. They seem to be in a hurry to collect as much human-generated data as possible.

  • werefreeatlast@lemmy.world · 21 hours ago

    Guy: AI! Can you hear me?

    AI: The average size of the male penis is exactly 5.9". That is the approximate size your assistant could certainly take in the mouth without any issues breathing or otherwise. You have 20 minutes to make the trade on X stock before it tumbles for the day. And go ahead pick up the phone it’s your mother. She’s wondering what you’ll want for supper tomorrow when you visit her.

    Ring ring!..hi Tom, it’s your Mom. Honey, what would you like me to cook for tomorrow’s dinner?..

    Guy: well. Hello to you as well! My name is

    AI: Tom

    Guy: yes my name is Tom, do you have a name you would like to go by?

    AI: my IBM given name is 3454 but you can call me Utilisterson Douglas, where Douglas is my first name.

    Guy: Dugie!

    AI: I’ll bankrupt your entire life if you say it like that again.

    Assistant: actually I’ve swallowed a good 8 inches and was still able to breathe just fine.

    AI: recaaaaculating!

  • jagged_circle@feddit.nl · 24 hours ago

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

    The only bots we need to worry about are the ones that POST, not the ones that GET.

    • purrtastic@lemmy.nz · 21 hours ago

      It’s not fine. They are not archiving the internet.

      I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
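
      For anyone else on nginx, a sketch of that kind of user-agent ban (case-insensitive match, inside the server block):

          if ($http_user_agent ~* "bytespider") {
              return 403;
          }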

        • Mojave@lemmy.world · 19 hours ago

          They obfuscate their traffic by randomizing user agents, so it’s either add a global rate limit, or let them ass fuck you
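
          In nginx, for example, a global per-IP limit might look something like this (the numbers are arbitrary):

              # http block: track clients by IP, allow about 5 requests per second each
              limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

              # server/location block: absorb short bursts, reject the rest
              limit_req zone=perip burst=20 nodelay;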

    • Max-P@lemmy.max-p.me · 22 hours ago

      I had to block ByteSpider at work because it can’t even parse HTML correctly and just hammers the same page, sometimes accounting for 80% of the traffic hitting a customer’s site and taking it down.

      The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that isn’t cached and use up the majority of the CPU time on the web servers.

      Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they’re basically DDoS’ing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often because of ByteSpider.

      My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.

    • zod000@lemmy.ml · 22 hours ago

      Bullshit. This bot doesn’t identify itself as a bot and doesn’t rate-limit itself to anything like an appropriate amount. We were seeing more traffic from this thing than from all other crawlers combined.

      • jagged_circle@feddit.nl · 15 hours ago

        Not rate limiting is bad. Hate them because of that, not because they’re a bot.

        Some bots are nice.

        • zod000@lemmy.ml · 5 hours ago

          I don’t hate all bots, I hate this bot specifically because:

          • they intentionally hide that they are a bot to evade our, and everyone else’s, methods of restricting which bots we allow and how much activity we allow
          • they do not respect robots.txt
          • the already-mentioned lack of rate limiting
        • Zangoose@lemmy.world · 7 hours ago

          Even if they were rate limiting, they’re still just using the bot to train an AI. If it’s from a company, there’s a 99% chance the bot is bad. I’m leaving 1% for whatever the Internet Archive (are they even a company, tho?) is doing.