I figured out how to remove most of the safeguards from some AI models, and I don’t feel comfortable sharing that information with anyone. I have also come across a few layers of obfuscation meant to make this type of alteration more difficult to find and sort out. This made me realize that a lot of you are likely facing similar dilemmas around responsibility, gatekeeping, and manipulating others for ethical reasons. How do you feel about this?

  • talkingpumpkin@lemmy.world
    2 months ago

    I don’t see the ethical implications of sharing that. What would happen if you did disclose your discoveries/techniques?

    I don’t know much about LLMs, but doesn’t removing these safeguards just make the model as a whole less useful?

      • DarkCloud@lemmy.world
        2 months ago

        There are already censorship-free versions of Stable Diffusion available. You can run them on your own computer for free; a rough sketch of what that looks like is below.
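
        For anyone wondering what running it locally actually involves, here is a minimal sketch using the Hugging Face diffusers library. It assumes you have torch and diffusers installed and a CUDA GPU with enough VRAM; the model id and prompt are illustrative examples, not a specific recommendation.

        ```python
        # Minimal sketch: generate one image locally with Stable Diffusion
        # via the diffusers library. Assumes torch + diffusers are installed
        # and a CUDA GPU is available; the model id below is just an example.
        import torch
        from diffusers import StableDiffusionPipeline

        # Download the model weights (or load them from the local cache).
        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1",  # example model id
            torch_dtype=torch.float16,
        )
        pipe = pipe.to("cuda")  # switch to "cpu" if you have no GPU, at the cost of speed

        # Generate a single image from a text prompt and save it to disk.
        image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
        image.save("lighthouse.png")
        ```

        Everything here runs on your own hardware; after the initial weight download, no external service is involved.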