Sure If that happens. But it may also not. Which is actually usually the case. Sure, it’s not 100% safe, but it is safer.
Sure If that happens. But it may also not. Which is actually usually the case. Sure, it’s not 100% safe, but it is safer.
How can you be sure it’s one line of code? What if there are several codepaths, and venvs are activated in different places? And in any case, even if there is only one conditional needed, that is still one branch more than necessary to test.
Your symlink example does not make sense. There is someting that is changing. In fact, it may even be the opposite: if you need to use file A in s container, and file B otherwise, it may make perfect sense to symlink the correct file to C, so thst your code does not need to care about it.
Upgrading the base image does not imply updating your python, and even updating your python does not imply updating your python packages (except for the standard libraries, of course).
But then it’s easy to just check an environment variable and skip, if inside Docker.
How is forcing your script to be Docker-aware simpler than just always creating a venv?
It’s a bit unclear to me what you refer to with “their argument”. What argument exactly?
Is Yubico actually claiming it is more secure by not being open source?
there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.
Of course you can. Why would you not, just because it is non-deterministic? Non-determinism does not mean complete randomness and lack of control, that is a common misconception.
Again, obviously you can’t teach an LLM about morals, but you can reduce the likelyhood of producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit the usefulness in some cases, but that is the case also today in many situations that don’t involve AI, e.g. some people complain they “can not talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximslists exist, but are a minority.
Well, I, and most lawmakers in the world, disagree with you then. Those restrictions certainly make e.g killing humans harder (generally considered an immoral activity) while not affecting e.g. hunting (generally considered a moral activity).
So what possible morality can you build into the gun to prevent immoral use?
You can’t build morality into it, as I said. You can build functionality into it that makes immmoral use harder.
I can e.g.
Society considers e.g hunting a moral use of weapons, while killing people usually isn’t.
So banning ceramic, unmarked, silenced, full-automatic weapons firing armor-piercing bullets can certainly be an effective way of reducing the immoral use of a weapon.
While an LLM itself has no concept of morality, it’s certainly possible to at least partially inject/enforce some morality when working with them, just like any other tool. Why wouldn’t people expect that?
Consider guns: while they have no concept of morality, we still apply certain restrictions to them to make using them in an immoral way harder. Does it work perfectly? No. Should we abandon all rules and regulations because of that? Also no.
Yes, and what I’m saying is that it would be expensive compared to not having to do it.
Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you then can feed into e.g an AI.
Obviously the 2nd LLM does not need to reveal the prompt. But you still need an exploit to make it both not recognize the prompt as being suspicious, AND not recognize the system prompt being on the output. Neither of those are trivial alone, in combination again an order of magnitude more difficult. And then the same exploit of course needs to actually trick the 1st LLM. That’s one pompt that needs to succeed in exploiting 3 different things.
LLM litetslly just means “large language model”. What is this supposed principles that underly these models that cause them to be susceptible to the same exploits?
Moving goalposts, you are the one who said even 1000x would not matter.
The second one does not run on the same principles, and the same exploits would not work against it, e g. it does not accept user commands, it uses different training data, maybe a different architecture even.
You need a prompt that not only exploits two completely different models, but exploits them both at the same time. Claiming that is a 2x increase in difficulty is absurd.
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Why?