• 0 Posts
  • 46 Comments
Joined 8 months ago
Cake day: November 10th, 2023

  • What do you gain by doing this? I trust both Proton and Mullvad not to fuck up their encryption, so attackers can’t read your traffic even through one VPN. The second one doesn’t offer additional security here.

    In your setup, Proton will only know you use Mullvad, but not which sites you visit in the end. Mullvad knows everything, just the same as without Proton. So the outer VPN doesn’t add privacy either.

    If you are suspected of a crime, forcing Mullvad to disclose your identity/IP is enough, and Proton doesn’t help.

    If you are worried about traffic correlation analysis, then yes, 2 VPNs will help. But honestly, for normal usage I don’t see the point of 2 VPNs.

    And about the DoS fear: just do it the other way round? Mullvad on the router, Proton on the device? From Proton’s perspective you produce the same amount of traffic, it just comes from a Mullvad server. The outer VPN is the one that sees increased traffic due to the 2 VPNs. But I am pretty sure neither will be a problem, and tunneling a VPN through a VPN is not a ToS violation.
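    For what the "Mullvad on the router, Proton on the device" split could look like, here is a sketch of the device-side (inner) WireGuard config. Everything in it is a placeholder (keys, addresses, endpoint hostname are made up); the point is only that `AllowedIPs = 0.0.0.0/0` routes all device traffic into the inner tunnel, which then leaves via the router's outer tunnel:

```
# Hypothetical inner (Proton-side) tunnel on the device.
# The outer (Mullvad) tunnel on the router wraps this a second time.
[Interface]
PrivateKey = <device-private-key>        # placeholder
Address = 10.2.0.2/32                    # placeholder tunnel address

[Peer]
PublicKey = <vpn-server-public-key>      # placeholder
Endpoint = inner-vpn.example.net:51820   # placeholder hostname
AllowedIPs = 0.0.0.0/0, ::/0             # send ALL traffic through the inner tunnel
```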


  • groet@feddit.de to Comic Strips@lemmy.world · “I’m in!” · 2 months ago

    You DON’T want to turn it off. Digital forensics works WAAAAAAY better if you have a memory dump of the system, and all of that memory is lost if you turn it off. Even if the virus ran 10 hours ago and the program has long stopped running, there will most likely still be traces in the RAM. As with a hard drive, simply deleting something in RAM doesn’t mean it is gone; as long as that specific area was not written over later, it will still hold the same contents. You can sometimes find memory that belonged to a virus days or even weeks after the infection if the system was never shut down. There is so much information in RAM that is lost when the power is turned off.

    You want to:

    1. Quarantine it from the network (don’t pull the cable at the system; firewall it at the switch if possible).
    2. Take a full copy of the RAM.
    2.5. Read out the BitLocker keys if the drive is encrypted.
    3. Turn it off and take a bitwise copy of the hard drive, or just send the drive + memory dump to the forensics team.
    4. Get coffee.
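    The “deleted but not overwritten” point can be shown with a toy scan over a raw memory image. Everything here is made up for illustration (real tools like Volatility do this properly): a freed buffer in the fake image still holds a string from a long-dead process, and a simple byte search finds it.

```python
# Toy illustration: strings from an exited process can survive in a raw
# memory image until their pages are reused. Scan a dump for a marker.

def find_marker(dump: bytes, marker: bytes) -> list[int]:
    """Return every offset where `marker` occurs in the raw image."""
    offsets, start = [], 0
    while (i := dump.find(marker, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets

# Fake 1 KiB "memory image": a freed buffer still holds a leftover URL.
image = bytearray(1024)
image[512:512 + 22] = b"http://evil.example/c2"  # never overwritten

print(find_marker(bytes(image), b"evil.example"))  # → [519]
```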











  • Wait, so without the option it checks against the system trust store, and with the option it does exactly the same (but may also include an additional CA if one was passed as the argument)?

    This should be a CVE. There is a security feature. It does not work as documented. That’s a vulnerability. That should get a CVE.

    Wtf apple
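    For contrast, here is how the “system store vs. only an explicit CA” distinction looks in Python’s `ssl` module. This is only an illustration of the concept, not of Apple’s curl build: a bare client context trusts no CAs at all until you load either the system store or a specific CA, which is the starting point a “trust only this CA” option needs.

```python
import ssl

# A bare client context has no CAs loaded at all...
bare = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(len(bare.get_ca_certs()))  # → 0

# ...while create_default_context() loads the system trust store.
system = ssl.create_default_context()

# Note: load_verify_locations() ADDS a CA to whatever is already loaded.
# An option documented as "trust only this CA" must therefore start from
# a bare context, not from one that already holds the system store.
```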



  • Can you verify the software running on an instance is the same as the one in the source code repository? You can’t. Can you verify the instance isn’t running code to read passwords from your login requests even if the code is the original open source code? You can’t.

    That’s why (among other reasons) you should never use a password for more than one site/service/instance.

    Lemmy admins (admins in the Lemmy application) probably can’t read your password. But everyone with admin rights on the server operating system can.
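    To make that last point concrete, here is a sketch of what any login endpoint necessarily does (function name and parameters are made up). TLS protects the password in transit, but the server process holds it in cleartext before hashing it, so whoever controls that process could log it:

```python
import hashlib
import os

def handle_login(password: str, salt: bytes) -> bytes:
    # At this point `password` is plaintext in the server's memory.
    # A malicious operator could add one line here to log it --
    # which is why reusing a password across instances is dangerous.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = handle_login("hunter2", salt)

# Later verification: recompute with the same salt and compare digests.
print(handle_login("hunter2", salt) == stored)  # → True
print(handle_login("wrong", salt) == stored)    # → False
```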



  • Corporations are doing a bad job at it as well. While government standards tend to be slow and stagnant, the free market produces an incomprehensible sea of standards. Look at USB, HDMI, 3G/4G signals, or Cat-X Ethernet cables: if a single global manufacturer decides to do things slightly differently, you get a new version of the standard that everybody has to be compatible with.




  • The thing that confused me when first learning about Docker was that everybody compares it to a virtual machine. It’s not. Containers don’t virtualize anything. They take a (single) process from the host OS and separate it into its own environment. All system calls, memory accesses, file writes etc. are still handled by the same OS (same kernel). However, the process is separated on both the filesystem and the process level. It can’t see other processes outside of the container, and it also doesn’t see the real filesystem; it sees a filesystem provided by the container. This also means it sees different file and user permissions. When you run an Alpine Linux Docker container on an Ubuntu system, the container only contains the (few) files for Alpine, but no Linux kernel and no desktop environment. A process inside that container only sees the Alpine files and not the Ubuntu files. It also means all containers see filesystems independent of each other and can use libraries and dependencies of different versions (they are only files after all).

    For administration it makes running complex services easy. You define how to set up that service (what base Linux distro to use, what packages to install, what commands to run, and how to start the process). You can then safely assume the setup of that service does not interfere with the setup of any other service. “Service 1 needs a certain system-wide config changed? Service 2 needs that config in the default state? And both need a different version of the same library?” In containers you can have all of that at the same time, because they each see their own version of the same config and library.
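    That “define how to set up the service” step is exactly what a Dockerfile expresses. A minimal hypothetical example (package, file, and script names are made up): base distro, packages, a config that exists only inside this container, and the one process to start.

```
# Hypothetical service definition.
FROM alpine:3.19
RUN apk add --no-cache python3           # install dependencies
COPY service.conf /etc/service.conf      # config visible only in THIS container
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]           # the single process the container runs
```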

    And all of this is provided by the kernel itself. All Docker does is provide an “easy” way to create and manage containers, but you could do all of that using chroot, runc, and a few other tools.

    As a note, containers usually don’t come with systemd, as they don’t need an init system. You run the service directly inside the container and then use systemd outside the container to make sure the container is started/restarted, or just Docker, as it can already do that itself.
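    For instance, a systemd unit like this (unit, image, and container names are made up) keeps a container running from the host side, with no init system inside the container:

```
# /etc/systemd/system/myservice-container.service  (hypothetical)
[Unit]
Description=My service, run as a Docker container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name myservice myimage:latest
Restart=always

[Install]
WantedBy=multi-user.target
```

    The Docker-native alternative is a restart policy on the container itself, e.g. `docker run --restart=unless-stopped …`.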

    I found a great article demystifying containers recently