• 1 Post
  • 146 Comments
Joined 2 years ago
Cake day: July 5th, 2023


  • I mean compared to HDDs.

    Of course there are also challenges to making a high-capacity SSD, but I don’t think they are using fundamentally new methods to achieve higher capacities. Yes, they need to design better controllers and heat management becomes a larger factor, but the NAND chips to my knowledge are still the same you’d see in smaller capacities. And the form factor has the space to accommodate them.

    If HDDs could just continue stacking more of the same platters into a drive to increase capacity, they’d have a much easier time scaling.



  • Your claim that they would advertise it is speculation. What would be the purpose of that?

    To advertise that they can? In return what would be the purpose to hide it?

    They do seem to make their advancements at least somewhat public, e.g. with their recent progress with an EUV light source.

    I am probably on the pessimistic side and you maybe on the optimistic one, so reality will likely end up somewhere in between (but only time will tell).

    China will do this because they have massive talent mass and resources, and because they have to.

    Well, it was also developed in the West by a large amount of talent and resources, and it still took a lot of time. But you are absolutely right that their hand is being forced.

    Restricting exports like this was imo a huge mistake, especially in regard to DUV. In the end it might have done some damage in the short/medium term, but that wasn’t anything the US could capitalize on, and it also directly hurt ASML’s profits (meaning fewer resources to advance). And regardless of how the timeline ends up looking in the end (be it closer to your prediction or mine), physics is the same everywhere, so it can’t be restricted and they will eventually figure it out.


  • GAA is the next evolution of transistor architecture after FinFET, but as far as I know it has no direct link to smaller process nodes. To my understanding it doesn’t require small nodes and could be used just as well on larger ones. It’s just that it is more difficult, so until now there were other, easier ways to make progress. However, with new nodes getting more expensive and delivering less scaling, GAA and other techniques like backside power delivery are being pursued.

    We will have to see if the process is actually good, but I have little doubt that China will become competitive in EUV within 5 years. But if they have it already next year, that will be very fast.

    So not only do you expect China to have a working domestically produced EUV machine within 5 years, but a competitive one? Or possibly even next year?

    Next year is just pure fantasy; I don’t think even the most optimistic would assume that. If they were anywhere close to it we would already know. They’d have shown a working prototype by now.

    EUV is crazy difficult, and it is not the result of a single company (ASML) alone, but of many highly specialized companies all over the world that are leaders in their respective fields, e.g. Zeiss for the lenses. So for China to replicate it domestically they’d need to copy the whole supply chain, which is orders of magnitude more difficult than what they’ve done in other industries like electric vehicles or solar panels.

    Imo if they have a working prototype of a complete EUV machine within this decade, that would already be impressive. But that would still be far off from mass production, or from wherever the industry is by then (Intel is currently trialing high-NA EUV). For reference, Wikipedia says ASML had their first prototype in 2006, and we know how long it took to bring that to mass production. China as a second mover might have an edge that speeds things up, but just knowing how it works in theory isn’t enough and there are no shortcuts.

    But maybe they’ll also pursue another technique such as nanoimprint lithography (like Canon) to achieve smaller nodes. Maybe that would be easier to replicate without the existing global supply chains.


  • Well, there are claims that Huawei is aiming for 3nm with GAA, with tape-out next year (see here).

    I think we shouldn’t forget that the nm numbers really are just that: numbers. They don’t correspond to any specific measurement and can be chosen more or less arbitrarily. So 6nm, for example, might just be a slightly refined 7nm node.

    Another thing is power efficiency and yields. Getting 4060 performance at terrible yields and with massive power draw is very different from getting there at parameters similar to Nvidia’s.

    If China does end up cracking EUV by themselves it would indeed be massive. It’s arguably one of the most complex things mankind has ever done. But there are so many factors to get right that tbh I don’t see it happening any time soon.




  • There’s currently a Kickstarter going on for a watch that aims to be modular and repairable. It’s called UNA Watch.

    Looks interesting, but imo with these things it’s a bit of a chicken-and-egg problem, where the upgradeability/repairability only has value if it is actually provided in the future (and is economically viable). Something that can only be proven over time, but that requires people to trust it beforehand.

    I’m not in the market for a new watch right now, since I just repaired the screen on my Garmin, but am keeping an eye on it, since sadly Garmin seems to have entered the early stages of enshittification.


  • golli@lemm.ee to Reddit@lemmy.world · Wtf has happened to reddit?

    I’d say the most recent major influences were the IPO and the emergence of LLMs.

    Reddit becoming a publicly traded company and the preparation to do so certainly initiated a major shift in its priorities.

    AI and large language models make it easier than ever to create shiny but low-quality content.

    And the rest is just reddit becoming more mainstream leading to an overall shift towards banal rather than niche topics.



  • Does this actually matter that much? I have a Pixel 6a that has the visor-style camera bump, and with a case on it just disappears.

    And even if I used the phone without a case, Google’s bar-shaped design still allows the phone to lie stable on a surface without wobble, just at a slight angle instead of flat. Which I guess would be an issue with other designs.



  • That’s pretty much me as well, except that I didn’t even spend the energy to try and learn the others. Simple docker compose, simple UI and an easy way to add services.

    I am sure there are alternatives that allow for more elaborate setups and fancier things. But for the low effort I put into it, I got a page with some nice buttons with appropriate icons that scales to whatever screen size it’s displayed on. The only additional thing I did was enable it to show some basic info, to see if e.g. SABnzbd is downloading something, which was also super easy.
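    For context, a minimal sketch of what such a low-effort setup can look like. This assumes the dashboard in question is Homepage (gethomepage.dev); the port and paths are illustrative, not taken from the comment:

```yaml
# docker-compose.yml — hypothetical sketch for the Homepage dashboard
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    ports:
      - "3000:3000"            # web UI
    volumes:
      - ./config:/app/config   # services.yaml, widgets.yaml etc. live here
    restart: unless-stopped
```

    Service tiles (and status widgets such as the SABnzbd download info mentioned above) are then added as short YAML entries under `./config/`, which is what keeps adding a new service so low-effort.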


  • If we are talking about the manufacturing side, rather than design/software, I am very curious to see how SMIC develops. You are absolutely right that there is a big advantage for the second mover, since they can avoid dead ends and already know on an abstract level what works. And diminishing returns also help make gaps slightly less relevant.

    However, I think we can’t just apply the same timeline to them and say “they have 7nm now, and it took others x years to progress from there to 5nm or 3nm”, because these steps include the major shift from DUV to EUV, which was in the making for a very long time. And that’s a whole different beast compared to DUV, where they are probably also still relying on ASML machines for the smallest nodes (although I think producing those domestically is much more feasible). Eventually they’ll get there, but I think this isn’t trivial and will take more than 2 years for sure.

    On the design side vs Nvidia, the hyperscalers like Alibaba/Tencent/Baidu, or maybe even a smaller newcomer, might be able to create something competitive for their specific use cases (like the Google TPUs). But Nvidia isn’t standing still either, so I think getting close to parity will be extremely hard there as well.


    Of course, the price gap will shrink at the same rate as ROCm matures and customers feel its safe to use AMD hardware for training.

    Well, to what degree ROCm matures and closes the gap is probably the question. Like I said, I agree that their hardware seems quite capable in many ways, although my knowledge here is quite limited. But AMD so far hasn’t really shown that they can compete with Nvidia on the software side.


    As far as Intel goes, being slow in my reply helps my point. Just today Intel canceled their next-generation GPU Falcon Shores, making it an internal development step only. As much as I am rooting for them, it will need a major shift in culture and talent for them to right the ship. Gaudi 3 wasn’t successful (I think they didn’t even meet their target of $500M in sales) and now they probably don’t have any release in 2025, assuming Jaguar Shores is 2026, since Falcon Shores was slated for the end of this year. In my books that is the definition of being more than 1 year behind, considering they are not even close to parity right now.


  • Yeah. I don’t believe market value is a great indicator in this case. In general, I would say that capital markets are rational at a macro level, but not micro. This is all speculation/gambling.

    I have to concede that point to some degree, since I guess I hold similar views on Tesla’s value vs the rest of the automotive industry. But I still think that the basic hierarchy holds true, with Nvidia being significantly ahead of the pack.

    My guess is that AMD and Intel are at most 1 year behind Nvidia when it comes to tech stack. “China”, maybe 2 years, probably less.

    Imo you are too optimistic with those estimates, particularly with Intel and China, although I am not an expert in the field.

    As I see it, AMD seems to have a quite decent product with their Instinct cards in the server market on the hardware side, but they wish they had something even close to CUDA and its mindshare, which would take years to replicate. Intel wishes they were only a year behind Nvidia. And I’d like to comment on China, but tbh I have little to no knowledge of the state of their GPU development. If they are “2 years, probably less” behind as you say, then they should have something like the RTX 4090, which was released at the end of 2022. But do they have something that even rivals the 2000 or 3000 series cards?

    However, if you can make chips with 80% performance at 10% price, its a win. People can continue to tell themselves that big tech always will buy the latest and greatest whatever the cost. It does not make it true.

    But the issue is they all make their chips at the same manufacturer, TSMC, even Intel in the case of their GPUs. So they can’t really differentiate much on manufacturing costs and are also competing for the same limited supply. So no one can offer 80% of the performance at 10% of the price, or even close to it. Additionally, everything around the GPU (datacenters, rack space, power usage during operation etc.) also costs money, so the GPU is only part of the overall package cost, and you also want to optimize for your limited space. As I understand it, datacenter construction and power delivery are actually another limiting factor for the hyperscalers right now.

    Google, Meta and Amazon already make their own chips. That’s probably true for DeepSeek as well.

    Google yes, with their TPUs, but the others all use Nvidia or AMD chips to train. Amazon has their Graviton CPUs, which are quite competitive, but I don’t think they have anything on the GPU side. DeepSeek is way too small and new for custom chips; they evolved out of a hedge fund and just use Nvidia GPUs like more or less everyone else.



  • I have to disagree with that, because this solution isn’t free either.

    Asking them to regulate their use requires them to build excess capacity purely for those peaks (so additional machinery), to hold more inventory in stock, and, depending on how labor-intensive the process is, it also means people have to work on a less reliable schedule. Some processes might also simply not be able to ramp up/down fast enough (or at all).

    This problem is simply a question of whether it is cheaper to a) build excess capacity or b) build enough capacity to meet demand with steady production and add battery storage as needed.

    Compared to most manufacturing lines, battery tech is relatively simple, requires little to no human labor and is still making massive gains in price/performance. So my bet is that it’ll be the cheaper solution.

    That said, it is of course not a binary thing, and there might be some instances where we can optimize energy demand and supply, but I think in industry those will happen naturally through market forces. However, this won’t be enough to smooth out the difference in the timing of supply and demand.
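    The a)-vs-b) trade-off above can be sketched as a toy calculation. All numbers here are hypothetical placeholders, not real industry figures; the point is only the shape of the comparison:

```python
# Toy cost comparison: oversized production line vs. battery storage.
# All figures are made-up placeholders for illustration only.

def excess_capacity_cost(peak_kw: float, base_kw: float, cost_per_kw: float) -> float:
    """Option a): size the line for renewable-supply peaks, i.e. pay for
    extra machinery that sits idle outside those peaks."""
    return (peak_kw - base_kw) * cost_per_kw

def battery_cost(shifted_kwh: float, cost_per_kwh: float) -> float:
    """Option b): run the line at a steady base load and use batteries
    to bridge the timing gap between supply and demand."""
    return shifted_kwh * cost_per_kwh

# Hypothetical plant: 5 MW at peak vs 3 MW steady, shifting 10 MWh per day.
a = excess_capacity_cost(peak_kw=5000, base_kw=3000, cost_per_kw=1000)
b = battery_cost(shifted_kwh=10000, cost_per_kwh=150)
print(f"excess capacity: ${a:,.0f}, batteries: ${b:,.0f}")
```

    Whichever number is lower wins, and since battery $/kWh keeps falling while machinery costs don’t, the answer drifts toward b) over time.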


  • It’s a reaction to thinking China has better AI

    I don’t think this is the primary reason behind Nvidia’s drop. Because as long as they have a massive technological lead, it doesn’t matter as much to them who has the best model, as long as these companies use their GPUs to train them.

    The real change is that the compute resources (which are Nvidia’s product) needed to create a great model suddenly fell off a cliff. Whereas until now the name of the game was that more is better and scale is everything.

    China vs the West (or upstarts vs big players) matters to those who are investing in creating those models. For example Meta, who presumably spends a ton of money on highly paid engineers and data centers, and somehow got upstaged by someone with a fraction of their resources.