• 5 Posts
  • 34 Comments
Joined 1 year ago
Cake day: August 11th, 2023

  • I don’t know why you are getting downvoted to hell. This is actually correct. They put the second connector on there for a reason. People, including myself, have done the maths on this before and it’s all above board; see the rough numbers below. Only fringe cases involving power transients, out-of-spec cards, and obviously overclocking should actually make this a problem. Even then, 12VHPWR runs the same current density as a daisy-chained 8-pin setup, if not more.
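
    A rough version of that maths, as a sketch (assuming the common cable build where three 12 V conductors feed both 8-pin ends; wire counts vary between PSUs, so treat the figures as illustrative):

    ```python
    # Back-of-the-envelope current per 12 V wire (illustrative, not a statement
    # about any specific PSU or cable).
    PCIE_8PIN_WATTS = 150   # spec limit per 8-pin PCIe connector
    HPWR_WATTS = 600        # spec limit for a full-power 12VHPWR connector
    VOLTS = 12

    # Daisy-chained cable: two 8-pin ends sharing (typically) three 12 V wires.
    daisy_amps_per_wire = (2 * PCIE_8PIN_WATTS / VOLTS) / 3

    # 12VHPWR: six 12 V pins carrying the whole load.
    hpwr_amps_per_wire = (HPWR_WATTS / VOLTS) / 6

    print(f"Daisy-chained 8-pin: {daisy_amps_per_wire:.1f} A per wire")  # ~8.3 A
    print(f"12VHPWR:             {hpwr_amps_per_wire:.1f} A per wire")   # ~8.3 A
    ```

    Both work out to roughly 8 A per conductor, which is why a properly built daisy-chained cable stays within spec for most cards.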




  • Not all PSUs even have a second cable. Mine sure doesn’t.

    Technically it’s fine to use daisy-chained connectors. People get into trouble, though, with badly built power supplies, extreme overclocking, or cards like the R9 295X2 that blatantly violate the specifications.

    Older PSUs sometimes have trouble with new GPUs. It generally happens because new cards have large power transients that the older spec didn’t take into account. Sometimes running a second line fixes this for one reason or another, but not always. 12VHPWR actually uses similar current per wire, or per cross-sectional area of wire, as a daisy-chained setup, if not a little more.


  • I am using an RX 6700 XT on one cable as well and it’s perfectly fine. If your PSU has a second cable you can run that to be sure, but if, like mine, it doesn’t, don’t worry about it. It’s only certain corner cases, like extreme overclocking or cards and PSUs that violate the specifications, that actually cause issues; the Radeon R9 295X2 would be an example. 12VHPWR actually runs a similar amount of current per wire as a daisy-chained 8-pin setup, with an even smaller connector. You should not use third-party splitters though if you want to be safe.





  • Depending on what battery protection modes are in play, many phones have smart charging or other features designed to prolong battery life. Also, a fair few batteries come out of the factory with greater than their design capacity. It’s called a design capacity and not an absolute capacity for a reason. A phone battery that left the factory at 110% could conceivably still be at or above 100%.

    FYI, it’s not overnight charging that’s the issue either; it’s charging to 100%. What one device considers 100% varies, and devices will essentially lie to you about it. 4.2 V is normally considered 100% full for lithium cobalt oxide batteries, yet some devices push higher than this to pad capacity while others skirt under it to extend cycle life. It’s about tradeoffs.
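
    As a toy illustration of that last point (the cutoff voltages below are made-up examples, not measurements from real devices, and real fuel gauges use far more than a simple voltage map):

    ```python
    # Two hypothetical devices that define "100%" at different cell voltages.
    # The cutoffs are illustrative assumptions, not specs for real hardware.
    FULL_VOLTAGE = {
        "device_a": 4.35,  # pushes past 4.2 V to pad advertised capacity
        "device_b": 4.10,  # stops short of 4.2 V to stretch cycle life
    }
    EMPTY_VOLTAGE = 3.0    # crude "0%" point for this toy model

    def displayed_percent(device: str, cell_voltage: float) -> float:
        """Naive linear voltage-to-percent map, just to show the idea."""
        full = FULL_VOLTAGE[device]
        pct = 100 * (cell_voltage - EMPTY_VOLTAGE) / (full - EMPTY_VOLTAGE)
        return max(0.0, min(100.0, pct))

    # The same physical cell at 4.2 V reads differently on each device.
    print(displayed_percent("device_a", 4.2))  # ~88.9, not yet "full"
    print(displayed_percent("device_b", 4.2))  # 100.0, already "full"
    ```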





  • AMBA/AXI bus in the case of the Pi. GPUs existed long before PCIe did lol.

    On some x86 systems the CPU and GPU aren’t connected with PCIe either. AMD has Infinity Fabric, which it uses for things like the Instinct MI300 and some of its other APUs.

    Edit: Oh yeah, also ARM isn’t just low power anymore; it’s used in data centers and supercomputers these days. Even if it were, there is lots of stuff you can do with a low-power node, including running file servers, DNS or Pi-hole, web servers, torrent/usenet downloaders, image and music servers, etc. I have also seen them used to maintain cluster quorum after the loss of a more powerful node. A two-node cluster won’t have quorum if one fails, so adding a Pi as a third node makes sense (quick sketch below).
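
    The quorum point is just majority-voting maths (generic sketch, not specific to any one cluster stack):

    ```python
    # Majority quorum: strictly more than half of the configured nodes must be up.
    def quorum_threshold(total_nodes: int) -> int:
        return total_nodes // 2 + 1

    for nodes in (2, 3):
        need = quorum_threshold(nodes)
        print(f"{nodes} nodes: need {need} up, "
              f"survives one failure: {nodes - 1 >= need}")

    # 2 nodes: need 2 up -> a single failure kills quorum
    # 3 nodes: need 2 up -> a cheap Pi as the third vote keeps the cluster quorate
    ```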


  • Not all of them. Have a look at a Raspberry Pi or Apple Silicon devices. In fact, I am fairly sure most ARM SoCs don’t use PCIe for their iGPUs, which makes sense when you think about the unified memory architecture some of these devices use. Just in case you aren’t aware, Proxmox does indeed run on a Raspberry Pi, and I am sure it will gain support for more ARM devices in the future. Though I believe an x86 device with unified memory could also have problems here.




  • 2-3 clicks? That’s hilarious!

    These are the steps it actually takes: https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-passthrough-to-vm/

    That’s the best-case scenario where it actually works without significant issues, which I am told is rarely the case with iGPUs.

    In my case it was considerably more complicated, as I have two GPUs from Nvidia (one used for host display output), so I needed to block specific IDs rather than whole kernel modules, roughly as in the snippet below.
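
    For reference, the ID-based approach typically ends up as something like this (the vendor:device IDs are placeholders; you would substitute whatever `lspci -nn` reports for the card being handed to the VM):

    ```
    # /etc/modprobe.d/vfio.conf -- bind only the guest GPU to vfio-pci by ID
    # 10de:1234 and 10de:5678 are placeholder IDs, not real values
    options vfio-pci ids=10de:1234,10de:5678
    softdep nvidia pre: vfio-pci
    ```

    Followed by regenerating the initramfs and rebooting; the exact steps depend on which guide you follow.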

    Plus you lose display access to the Proxmox server, which is important if anything goes wrong. You can also only pass the GPU through to one VM at a time. With LXC, by comparison, you can share it with an almost unlimited number of containers and still have display output for the host system. It almost never makes sense to use PCIe passthrough on an iGPU.

    The main reason to do full passthrough is gaming on Windows VMs. Another is that Nvidia support on Proxmox is poor.

    This is a guide to do passthrough with LXC: https://blog.kye.dev/proxmox-gpu-passthrough

    It’s actually a bit less complicated for privileged LXC, as that guide has to work around the restrictions of unprivileged containers.
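
    For a privileged container, the core of the LXC route is usually just a couple of lines in the container’s config exposing the DRM devices (rough sketch of the common approach; `101` is a placeholder container ID, and the linked guide covers the extra ID mapping an unprivileged container needs):

    ```
    # /etc/pve/lxc/101.conf (excerpt) -- 101 is a placeholder container ID
    # Allow access to DRM devices (char major 226) and bind-mount /dev/dri
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    ```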






  • I already have a router from another house. Not helpful given it doesn’t have 5G. Also Walmart? I am not an American lol.

    So what this would actually mean is cancelling a 24-month contract, buying two devices (a 5G modem and another to run OpenWRT) for well over £300, and shipping the ISP’s device back. All with no guarantee any of it will work, given my previous experience buying cellular modems. It would probably take a week or more and cause further disruption to my parents, who have just moved house, with one of them in hospital. That’s not taking into account anything that could go wrong with OpenWRT itself, which is any number of things given it’s unofficial firmware that I have no previous experience with.

    Yeah, no, that’s not going to happen. They aren’t going to go for that, and honestly I don’t blame them; it’s a horrible deal even if I pay for half the equipment.