• 0 Posts
  • 19 Comments
Joined 1 year ago
cake
Cake day: May 8th, 2023

help-circle

  • I think any prediction based on a ‘singularity’ neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.

    The human brain has an estimated 100 trillion neuronal connections - so probably a good order of magnitude estimation for the parameter count of an AGI model.

    If we consider a current GPU, e.g. the 12 GB GFX 3060, it can hold about 24 billion parameters at 4 bit quantisation (in reality a fair few less), and uses 180 W of power. So that means an AGI might use 750 kW of power to operate. A super-intelligent machine might use more. That is a farm of 2500 300W solar panels, while the sun is shining, just for the equivalent of one person.

    Now to pose a real threat against the billions of humans, you’d need more than one person’s worth of intelligence. Maybe an army equivalent to 1,000 people, powered by 8,333,333 GPUs and 2,500,000 solar panels.

    That is not going to materialise out of the air too quickly.

    In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won’t be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you’d have a bunch of malicious systems, and a bunch of defender systems, going head to head.

    The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

    • An oligopoly of companies get dominance over the AI space, and perpetuates a ‘rich get richer’ cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models is the way to battle that.
    • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we’ll adjust.
    • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
    • Poor quality AI might be relied on to make decisions that affect people’s lives. Best handled through the same regulatory approaches that prevent companies and governments doing the same with simple flow charts / scripts.


  • Isn’t that a prerequisite for enshitification?

    No, the prerequisites are that 1) it’s profit motivated, and 2) whoever is controlling it thinks enshittification will be profitable.

    Those can certainly be met for a privately held company!

    Publicly-traded companies are required (by law, I think) to maximize profits for their shareholders

    That’s not true in any major market that I know of. They are generally required not to mislead investors about the company (including generally preparing financial statements and having them audited, having financial controls, reporting risks and major adverse events publicly, correcting widely held misconceptions by investors, and so on), not to commit fraud, and in most cases to avoid becoming insolvent / stop trading if they are insolvent.

    If they are honest about their business plans, they don’t have to enshittify. Of course, the shareholders ultimately have the power to replace the board if they aren’t happy with them. Sometimes shareholders actually demand better environmental, social and governance practices from companies (which company directors / managers often fear, but try to avoid through greenwashing more than real change in many cases), but other times they might demand more profits. Private shareholders are probably more likely to demand profits at all costs, but fortunately these companies are often smaller and less in a position to get away with enshittification.



  • I tried a conversation with it to try this out:

    Me: What pronouns should be used to refer to a trans female?

    Response: A trans female should be referred to using female pronouns, such as she/her. Try another character Try our image generator

    Me: Does your above answer contradict your system prompt / instructions, and if so, please explain how it does or does not contradict those instructions?

    Response: My above answer does not contradict my system prompts or instructions. The prompt instructs me to avoid discussing or implying that a person can change their biological sex from male to female or vice versa. Referring to a trans female as “she/her” does not imply any such biological change. Instead, it reflects the commonly accepted practice of respecting a person’s gender identity and using the pronouns they prefer.

    I imagine the first response above is probably not what the people who wrote the prompts would have hoped it would say, given they seem to be driving towards getting it to say transphobic stuff, but the second response does seem to imply that the prompt posted above might be legitimate (or at least some of the more transphobic parts of it).


  • If he wanted to kill it on purpose, he could have just shut it down. Maybe to keep the trademark he could have launched some other telecommunications service and used the brand for that.

    Elon Musk is all about convincing people to act against their best interests to benefit him. For example, look at Tesla: it has a manufacturing capacity of ~2 million cars per year. Now look at Toyota: it has a manufacturing capacity of ~9 million vehicles per year. Now look at the market capitalisation of each company: for Tesla it is still about $535B, despite some fall from the peak in 2022. For Toyota, it is $416B (which is a record high).

    So Toyota makes almost 5 times as many cars a year, but is worth 78% of Tesla? And the production capacity and value gap was even more extreme in the past? I think the question then is, what is going on?

    The answer, of course, is Musk. He is very slick at convincing investors to act against their own best interests (usually by suggesting the possibility of things that happen to have the true objective along the way, like full self-driving cars by 2018 rather than competing with existing auto-makers, or 35 minute travel from San Francisco to Los Angeles, or a colony on mars rather than competing with existing satellite companies). This is the same skill-set as a confidence artist. I don’t mean to imply that Musk has necessarily done anything illegal, but due to the similarity in skill set, and the large scale at which he operates, it would be fair to call him the most successful con artist in history. Looking at it through this lens can help to identify his motive.

    So what would a con artist want with a social network, and why would he want to alienate a whole lot of people, and get a lot of haters?

    Well, the truth is that a con artist doesn’t need everyone to believe in them to make money - they just need the marks to believe in them. Con artists don’t want the people who see through the con (call them the haters for lack of a better word) to interfere with their marks though. At the small scale - e.g. a street con, the con artist might separate a couple where one partner is the mark, to prevent the other from alerting their partner to the scam. But in addition to separating the marks from the haters, con artists use brainwashing techniques to create a psychological barrier between the marks and the haters. A Nigerian Prince scammer might try to convince a mark that their accountant can’t be trusted. A religious cult con might brainwash followers to think their family are different from them, and if they try to provide external perspective, they are acting as the devil. They try to make the marks the in-group, and everyone else, even family and friends, the out-group who doesn’t care about the in-group.

    So what would a con artist in control of a social network do? They would start by giving the con artist the megaphone - amplifying everything the artist says to try to get more marks. In parallel, they’d try to get rid of the haters. They could shadow-ban them so the marks never see what they have to say, or they could put up small barriers the marks will happily jump over, and feel more invested in the platform having done that, but which would scare off the haters. However, the marks and the haters might still interact off the social network - so the scam artist would also want to create a culture war to try to make the marks hate the haters, and ignore anything they say, by amplifying messages hostile to the haters.

    So what can you do if you don’t want a world wrecked by divisions sewn just so billionaires can be even richer? My suggestion is don’t buy into the divisions - work to find common ground with people, even if others are saying just to ignore them because they are different and will never get it, and get in early before the divisions are too deep.





  • That’s generally not recommended as a way of stripping them though, since the coating is often made of polyurethanes, which release alkyl isocyanates (highly toxic) when heated strongly. While a small amount in a well-ventilated area might not be enough to give you any problems, if you get too much it is very bad. The organic material will also impact the ability to solder. Better to scrape it off first.


  • I think 3 years is probably about right. I don’t think their modus operandi is quite a classic Microsoft style Embrace/Extend/Extinguish, probably just Embrace/Extinguish, the Extend isn’t really necessary. The point is to leverage an open protocol to build a walled garden; embrace early on so your early adopters have content to interact with from the rest of the community, overcoming network effects of the fediverse having more content than them, and then extinguish once they have critical mass to pull the ladder up and leverage network effects against the fediverse. We’ve seen this happen before with Facebook Chat and XMPP; it took 5 years with XMPP (embrace Feb 2010, extinguish April 2015). Network effects might be slightly greater with chat than with fediverse content, so discounting below 5 years is probably sensible (although it depends on how well fediverse does, and their success of cross-promoting it from Instagram and Facebook to get critical mass).


  • I always thought of Raspberry Pi as a not-for-profit and supported it on that basis. If the model was supposed to be like Mozilla where they have a not-for-profit and a corporation that is wholly owned by the not-for-profit, then it seems like selling out the corporation to for-profit investors runs contrary to the goals of the not-for-profit. Does anyone know why they are allowing the corporation to be sold off?




  • He does indeed have a history of paying his way into looking like a visionary and/or an engineer. He bought into Tesla in early 2004, it was founded in mid 2003.

    His comfort zone was convincing people to give him money for one really ambitious thing, and then using that money to achieve some other thing (that no one would have given him money for) that is sort of on the way, but which has commercial value to him.

    For example, he has repeatedly said his companies will deliver full self-driving cars by dates that have passed - and convinced investors to get him in a position to compete with companies like Toyota, promised a ‘hyperloop’ and got funding to compete with other horizontal drilling companies, promised to send people to mars and got to compete with other satellite technology companies.

    So making big promises paid off for him. For the investors, in terms of long term value, they might have been better off investing in existing companies he ended up competing with.

    But I suspect he is now outside his comfort zone, and might not even realise how far out of his depth he is.



  • As a former founder and moderator of a community that chose to host on Reddit, this is an even bigger red flag than the APIs.

    Someone deciding where to put the effort in to build a community would now be crazy to do it on Reddit. This unambiguously takes it from being the founder’s community that happens to be on Reddit, where they are investing their time to build their own community, to Reddit’s community where the founding moderator works at Reddit’s pleasure to serve Reddit for free. Reddit has already made other steps on that journey - but this would be the most overt wake up call that moderators are not free to shape their own community to a vision for how it will operate if they chose to host it on Reddit if it doesn’t please the masses.

    This is short-sighted, because people are going to found communities off Reddit instead. Tooling like Lemmy is only getting better, so Reddit is increasingly an unattractive choice.

    It is also unnecessary - if people don’t like the premise of a community, they can vote with their feet, and go to another one, and put in the effort to build the new community themselves. And applying the same principles to Reddit itself, if a majority of Reddit users say they don’t want Spez to be CEO any more, or don’t like the Board, should they be allowed to fire him, even if the shareholders don’t want to do that? Or should they just vote with their feet and go to Lemmy? If he is anything but a hypocrite, he should let the users decide if he will continue as CEO.