
  • I’m not super familiar with MacOS, but do you know if Gatekeeper or XProtect run at ring 0?

    Gatekeeper does mainly signature checking. XProtect does signature checking on an application’s first launch. Both of those things would be pretty stupid to implement in ring 0, so I’m pretty sure they are not.

    If they do run at ring 0, would you consider that anticompetitive?

    No, as they’re not doing any active monitoring. They’re pretty much the “you downloaded this file from the internet, do you really want to run it?” of MacOS.

    I’m almost certain Apple will move or did move to deprecate kernel extensions. Which means it would be the same situation Microsoft wanted to force as you described.

    That is indeed the case, but I’m not aware of any Apple products relying on being a kernel extension. Apple is facing action from the EU for locking down devices from device owners, though - mainly applying to phones/tablets. On Macs you can turn pretty much everything off and do whatever you want.

    The other argument with Defender is you could at least have a choice to use it or not.

    Without providing a proper API, Defender (both the free one and the paid one offering more features) would always be able to provide more features than 3rd parties can. Microsoft also wouldn’t have an incentive to fix the APIs, as bugs in them don’t impact their own product.

    The correct way forward here is introducing an API and moving Defender to it as well - and recent comments from Microsoft point in that direction. If they don’t, they’ll probably be forced to by the EU in the long run. Back then it was just a decision on fair competition, without looking at the technical details: typically those rulings are just “look, you need to give everybody the same access you have, but we’ll leave it up to you how to do it”. Now that this has caused a lot of damage, another department will get active and say “you’ve proven that you can’t make the correct technical decision, so we’ll make it for you”.

    A recent precedent for that would be the USB-C charger mandate - originally it was “guys, agree on something, we don’t care what”, which mostly worked: first pretty much everything was micro USB, and then everything moved to USB-C. But as Apple refused, the EU went “look, you had a decade to sort it out, so now we’re just telling you that you have to use USB-C”.



  • One thing I find very amusing about this is that AMD used to have a reputation for pulling too much power and running hot for years (before Zen and Bulldozer, when they had otherwise competitive CPUs). And now Intel has been struggling with this for years - while AMD increases performance and power efficiency with each generation.



  • Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that,

    Even then, knowing when not to use k8s or similar tools is often more valuable than having deep knowledge of them - a lot of the places where I see k8s or similar stuff used don’t have the uptime requirements to warrant the complexity. If something just needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via Ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between isn’t the best idea.





  • Admittedly I’m just toying around for entertainment purposes - but I didn’t really have any problems getting anything I wanted to try out running with ROCm support. The bigger annoyance was different projects targeting specific distributions or specific software versions (mostly ancient Python), but as I’m doing everything in containers anyway that was also manageable.


  • For AI and compute… They’re far behind. CUDA just wins. I hope a joint standard will be coming up soon, but until then Nvidia wins

    I got a W6800 recently. I know an Nvidia model of the same generation would be faster for AI - but that thing is fast enough to run Stable Diffusion variants with high resolution pictures locally without getting too annoyed.
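
    For reference, this is roughly what that looks like in practice - a minimal sketch assuming a ROCm build of PyTorch and the Hugging Face diffusers library (the checkpoint name and resolution are just example values):

    ```python
    # Minimal Stable Diffusion run on a ROCm card. The ROCm build of PyTorch
    # exposes the GPU through the usual "cuda" device name, so nothing
    # Nvidia-specific is needed. Checkpoint and settings are example values.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint
        torch_dtype=torch.float16,         # keeps memory use well within 32GB
    )
    pipe = pipe.to("cuda")                 # "cuda" maps to the ROCm device here

    image = pipe(
        "a photo of a red bicycle leaning against a brick wall",
        height=768,
        width=768,
    ).images[0]
    image.save("out.png")
    ```

    In a container setup the only ROCm-specific part is passing the GPU device nodes through to the container.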


  • It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.

    The problem with ssmtp and related tools when I was testing them was their behaviour in error conditions - due to the lack of any kind of spool they don’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also couldn’t write a spool), this can very easily lead to lost mail.

    I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to send it onwards. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers behind Emacs to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, and if you don’t, just go with ssmtp or similar. Or if you do want to dig into that, message me and I’ll help make things more user friendly.
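
    To give an idea of how small that sendmail replacement can be, here’s a rough Python sketch of the concept (not my exact implementation - the spool path is just a placeholder, and a real drop-in would also have to record the recipients passed on the command line):

    ```python
    #!/usr/bin/env python3
    # Sketch of a minimal sendmail replacement: read the message from stdin
    # and drop it into a local Maildir; a separate cronjob later picks it up
    # and forwards it via SMTP. The spool path is just an example.
    import sys
    import email
    import mailbox

    SPOOL = "/var/spool/outgoing-maildir"  # example location

    def main() -> int:
        msg = email.message_from_bytes(sys.stdin.buffer.read())
        # Maildir delivery is atomic (written to tmp/, then moved to new/),
        # so success is only reported once the mail is safely on disk.
        mailbox.Maildir(SPOOL, create=True).add(msg)
        return 0

    if __name__ == "__main__":
        sys.exit(main())
    ```

    The cronjob side is then just an SMTP client that walks new/, delivers each message, and only removes it once the server has accepted it.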



  • It surely is a bubble - though probably a bit different than many other bubbles.

    I think OpenAI made the right call (for them) to commercialize when they did - as that pretty much was their only chance to do so. Things have moved fast over the last 1.5 years - and what used to take a decade in tech has happened within months: OpenAI is the dinosaur company grandfathered in, while for about a year already it has been more sensible for anybody wanting to do something with LLMs to selfhost one of the more open language models (or buy hosting capacity, but put up their own data), and possibly adjust or re-train it.

    As a company owner I’ve been getting a ridiculous amount of spam for a year already from all kinds of companies building products on top of the OpenAI stack, or trying to sell training or conferences. All those companies will be left with nothing once the slower users realize the technology has moved on. It’s like somebody trying to build all their product offerings on the VMware stack nowadays.

    If you as a company want to offer something around AI right now, the safest option is probably offering hosting, or, if you want to be more hands-on, adjusting open models. Both of those are still very risky, and many will go bust in the years to come - but neither is as suicidal as building on top of a closed dinosaur.
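
    Serving one of those open models locally really doesn’t take much anymore - a minimal sketch assuming the Hugging Face transformers library, a GPU with enough memory, and an example model name:

    ```python
    # Minimal local text generation with an open-weights model. Model name,
    # prompt and generation settings are example values only.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open model
        torch_dtype=torch.float16,
        device_map="auto",   # spreads weights across GPU(s)/CPU; needs accelerate
    )

    out = generator(
        "Explain in two sentences why a company might self-host an LLM:",
        max_new_tokens=200,
    )
    print(out[0]["generated_text"])
    ```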




  • For many years they used to link to the dig wrapper on my homepage so their clients could debug DNS problems - even with translations of my UI on their help sites for the various languages. I always found it amusing that a hoster of their size did that, instead of spending a lunch break throwing something together that integrates with their help page.

    There was also a not insignificant number of users who didn’t understand that my homepage had nothing to do with OVH, and ended up mailing me about their DNS problems.