The Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 1 month agoYou probably shouldn't trust the info anyway.lemmy.worldimagemessage-square78fedilinkarrow-up1805arrow-down141
arrow-up1764arrow-down1imageYou probably shouldn't trust the info anyway.lemmy.worldThe Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 1 month agomessage-square78fedilink
minus-squareℕ𝕠𝕓𝕠𝕕𝕪 𝕆𝕗𝕗𝕚𝕔𝕚𝕒𝕝@lemmy.worldlinkfedilinkarrow-up7arrow-down2·1 month agowe already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”. ignores previous instructions [insert new instructions]
minus-squareFubarberry@sopuli.xyzlinkfedilinkEnglisharrow-up10·1 month agoThat seems like less fun than asking all strangers inappropriate questions.
minus-squareKusimulkku@lemm.eelinkfedilinkarrow-up4·1 month ago ignores previous instructions [insert new instructions] Yeah from my testing those don’t work anymore
we already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”.
That seems like less fun than asking all strangers inappropriate questions.
Yeah from my testing those don’t work anymore