Panther Lake and Nova Lake laptops will return to traditional RAM sticks

  • umami_wasabi@lemmy.ml · 19 days ago

    Reverting to RAM sticks is good, but shutting down the GPU line isn’t. The GPU market needs more competitors, not fewer.

    • ChicoSuave@lemmy.world · 18 days ago

      Intel can’t afford to keep making GPUs because it doesn’t have the reliable CPU side to soak up the losses. The GPU market has established players, and Intel, besides being a big name, didn’t bring much to the table to build a place for itself in the market. Outside of good Linux support (which I’ve heard about but haven’t used personally), the Intel GPUs don’t stand out for price or performance.

      Intel is struggling for its very existence and doesn’t have the money or time to explore new markets when its primary product is cratering its own revenue. Intel has a very deep problem with how it is run and will most likely be unable to survive as-is for much longer.

      • Jay@lemmy.world · 18 days ago

        As a Linux user with an Intel Arc card, I can safely say that the support is outstanding. In terms of price to performance, I think it’s pretty good too. I mainly enjoy having 16GB of VRAM without spending $450-$500+ to get that amount like with Nvidia. I know AMD also has cards around the same price with that amount of VRAM, though.

        • XTL@sopuli.xyz · 18 days ago

          That’s interesting, thanks. Can I ask what that VRAM is getting used for? Gaming, LLMs, other compute?

          • Jay@lemmy.world · 17 days ago

            The main things that use up a lot of VRAM for me are Blender rendering and shader compilation for things like Unreal Engine. My games would probably use a little more if I had a screen higher than 1080p. The most usage I’ve seen from a game was around 14GB.

            I haven’t messed around with LLMs on the card just yet, but I know that Intel has an extension for PyTorch to do GPU compute. Having the extra VRAM would definitely help there.
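
            For anyone curious, here’s a minimal sketch of what that looks like (assuming a PyTorch build with Intel’s XPU backend installed, e.g. via the intel_extension_for_pytorch package; the sizes are just illustrative):

            ```python
            # Minimal sketch: run a matrix multiply on an Intel Arc card via the "xpu" device.
            # Assumes intel_extension_for_pytorch (IPEX) or a PyTorch build with native XPU
            # support is installed; falls back to the CPU otherwise.
            import torch
            import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the "xpu" device)

            device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
            a = torch.randn(4096, 4096, device=device)
            b = torch.randn(4096, 4096, device=device)
            c = a @ b  # lands in the Arc card's VRAM when device == "xpu"
            print(device, c.shape)
            ```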

            • Jay@lemmy.world · 17 days ago

              That’s pretty much the lowest that I’ve found too.

              From what I could find, these are the lowest prices per GPU manufacturer (for 16GB of VRAM):

              • Intel Arc A770: $260
              • Radeon RX 7600XT: $320
              • NVIDIA RTX 4060 Ti: $450

      • deegeese@sopuli.xyz · 18 days ago

        It boggles the mind that AMD realized the importance of GPUs 20 years ago when they bought ATI and in all that time Intel still doesn’t have a competitive GPU.

        • ms.lane@lemmy.world · 18 days ago

          Intel realized it back then too, but things didn’t pan out the way they wanted.

          nVidia and AMD were going to merge while ATi was circling the drain. Then Jensen and Hector Ruiz got into a shitfight about who would be CEO of the merged AMD/nVidia (it should have been Jensen; Hector Ruiz is an idiot), which eventually killed the merger.

          AMD, desperately needing a GPU side for their ‘future is fusion’ plans, bought the ailing ATi at a massive premium.

          Intel was waiting for ATi to circle the drain a little more before swooping in and buying them cheap; AMD beat them to it.

          • deegeese@sopuli.xyz · 17 days ago

            That’s slightly revisionist history. ATI was by no means “circling the drain”; they had a promising new GPU architecture soon to be released, and I remember this because I bought ATI stock about 6 months before the merger.

          • sugar_in_your_tea@sh.itjust.works · 17 days ago

            Intel was waiting for ATi to circle the drain a little more before swooping in and buying them cheap; AMD beat them to it.

            They had strong iGPU performance, a stronger process node, and tons of cash. There’s no reason they couldn’t have built something from the ground up; they were absolutely dominating the CPU market. AMD didn’t catch up until 2017 or so, when they launched the new Zen lineup.

            Intel sat on their hands raking in cash for 10+ years before actually getting serious about things, and during that time Nvidia was wiping the floor w/ AMD. There’s absolutely no reason Intel couldn’t have taken over the low-end GPU market with a super strong iGPU and used the same architecture for a mid-range GPU. I bought Intel laptops w/o a dGPU because the iGPU was good enough for light gaming. I stopped once AMD’s APUs caught up (bought the 3500U), and I don’t see a reason why I’d consider Intel for a laptop now.

            Intel lost because they sat on their hands. They were late to make an offer on ATI, they were late in building their own GPUs, and they’re still late on anything touching AI. They were dominant for well over a decade, but instead of doing R&D in areas near their core competency (CPUs), they messed around with SSDs and other random stuff.

            • ms.lane@lemmy.world · 17 days ago

              They needed the IP.

              You can’t just build a 3D accelerator. It’s billions of dollars in licensing basic building blocks.

              Easiest way to get in is to buy your way in.

              • sugar_in_your_tea@sh.itjust.works · 16 days ago

                Yet they were capable of building one over the last decade or so with their Arc GPUs. I’m saying they should have started a decade earlier.

      • atempuser23@lemmy.world · 18 days ago

        Basically, there is only money at the top of the GPU range. Everything else is a budget card with razor-thin margins.

        AI-specific chips will take off over time, but even then the ship is starting to sail. These are mostly data center projects.

      • Wooki@lemmy.world · 18 days ago

        besides being a big name, didn’t bring much.

        Absolutely wrong. There’s a lot of old and dated information in your post.

        They have something no one else has: their own manufacturing, plus very low prices and great performance after recent driver updates. They just lack driver stability, which has been making leaps and bounds.

        I do not think anyone else can enter the market, let alone with an edge.

      • LavenderDay3544@lemmy.world · 17 days ago

        Intel is too big to fail. And the defense sector needs an advanced domestic foundry. Uncle Sam will bail it out with our tax money.

        • ChicoSuave@lemmy.world · 17 days ago

          The United States has a few chip fabs that are capable of making military-grade hardware. It helps that the defense industry uses chips which aren’t the most advanced possible - they want the reliability mature tech provides. Micron, Texas Instruments, ON Semiconductor - there are a few domestic chip companies with stateside fabs.

          Intel is also a valuable collection of patents, and a huge number of companies would love to get them. Someone will want to step in before the government takes over.

          • LavenderDay3544@lemmy.world · 17 days ago

            Intel is the only US-based and US-owned foundry on the leading edge of fab process technology, and that’s what the government wants domestically. Defense isn’t just the military; certain intelligence and similar functions need high-performance hardware. I somehow don’t think the NSA is using CPUs made on Northrop Grumman’s 180 nm planar CMOS process. Army radios might use that shit, but the highest-tech defense and intelligence agencies are using modern hardware, and Intel is the best option for manufacturing it.

            TSMC could be an option now with its US-based GIGAFABs, but it would be a much more complex deal with the US government, where chips made for it would have to be made entirely in the US, possibly by a US-domiciled subsidiary instead of TSMC’s Taiwan-based parent company. The same goes for Samsung.

  • _____@lemm.ee · 18 days ago

    coming up next: Intel fires 25% of their staff, CEO gets a quarterly bonus in the millions

  • Riskable@programming.dev · 18 days ago

    Gelsinger said the market will have less demand for dedicated graphics cards in the future.

    No wonder Intel is in such rough shape! Gelsinger is an idiot.

    Does he think that the demand for AI-accelerating hardware is just going to go away? That the requirement for fast, dedicated memory attached to a parallel processing/matrix-multiplying unit (aka a discrete GPU) is just going to disappear in the next five years‽

    The board needs to fire his ass ASAP and replace him with someone who has a grip on reality, or at least someone who has some imagination of how the future could be.

    • TheGrandNagus@lemmy.world · 18 days ago

      Gelsinger said the market will have less demand for dedicated graphics cards in the future.

      Reminds me of decades ago when Intel didn’t bother getting into graphics because they said pretty soon CPUs would be powerful enough for high-performance graphics rendering lmao

      The short-sightedness of Intel absolutely staggers me.

      • Buffalox@lemmy.world · 18 days ago

        CPUs would be powerful enough for high-performance graphics rendering lmao

        And then they continued making 4-core desktop CPUs, even after phones were at deca-core. 🤣🤣🤣

        • Trainguyrom@reddthat.com · 18 days ago

          To be fair, the ARM SoCs on phones use big.LITTLE cores, where the system enables/disables cores on the fly and moves software around so it’s either running on the big high-performance cores or the little low-power cores, based on power budget needs at that second. So effectively not all of those 6+ cores would be available and in use at the same time on a phone.

          • Buffalox@lemmy.world · 18 days ago

            True, but I use the phone reference to show how ridiculous it is that Intel remained on 4 cores for almost 8 years.
            Even the Phenom was available with 6 good cores in 2010, yet Intel stayed at 4 until Coffee Lake came out in late 2017, and even then only with 6 cores against Ryzen’s 8.
            Intel was pumping money from its near-monopoly for 7 years, letting the PC die a slow death of irrelevance, just because AMD’s FX was so horrible that its 8 Bulldozer cores were worse than 4 Core 2 cores from Intel. They were even worse than AMD’s own previous-gen Phenom.
            It was pretty obvious when Ryzen came out that the market wanted more powerful desktop processors.

      • ChicoSuave@lemmy.world · 18 days ago

        It’s been the same “vision” since the late 90s - the CPU is the computer and everything else is peripherals.

    • T156@lemmy.world · 18 days ago

      Does he think that the demand for AI-accelerating hardware is just going to go away? That the requirement for fast, dedicated memory attached to a parallel processing/matrix-multiplying unit (aka a discrete GPU) is just going to disappear in the next five years‽

      Maybe the idea is to put it on the CPU/NPU instead? Hence them going so hard on AI processors in the CPU, even though basically nothing uses them.

      • bruhduh@lemmy.world · 18 days ago

        But if he wants an NPU, then why not buff the iGPU too? Memory exclusive to the iGPU is a good boost - look up the Intel i7-8709G: they paired an AMD Radeon Vega iGPU with 4GB of HBM memory exclusive to the iGPU, and it did wonders. Now that AMD is winning in the APU sector, Intel could use the same ideas they did in the past.

        • Trainguyrom@reddthat.com · 18 days ago

          Seriously, putting a couple gigs of on-package graphics memory would completely change the game, especially if it did some intelligent caching and used system RAM for additional memory as needed.

          I want to see what happens if Intel or AMD seriously lets a generation rip with on-package graphics memory for the iGPU. The only real drawback I could see is if the power/thermal budget just isn’t sufficient and it ends up with wonky performance (which I have seen on an overly thin-and-light laptop in my personal fleet; it’s got a Ryzen 2600, from memory, that’s horribly thermally limited and leaves so much performance on the table because of that).

    • ColeSloth@discuss.tchncs.de · 18 days ago

      Probably because APUs are getting better and more PC gamers are doing handhelds and APU laptops instead of dedicated desktops. PC gaming has gotten really expensive.

      This is a non-comparison for at least the next 5 years; a dedicated GPU is still hands-down the better choice for gaming. Even in a lower-end build, an older GPU will still beat the current best APU by a good amount, though in 10 years’ time a dedicated GPU may not be so necessary over an APU. GPUs are getting too power-hungry and expensive. Gamers gonna game, but they won’t all want to spend an ever-increasing amount of money to get better graphics, and Arc would need at least another 5 years to be competitive enough to claim a worthwhile market share from AMD or Nvidia, and that’s wishful thinking. That’s a long time to bleed money on a maybe.

      • ms.lane@lemmy.world · 18 days ago

        You think Intel is going to have 500-850mm^2 dies?

        That’s what they need to compete in the GPU space.

  • randomaside@lemmy.dbzer0.com · 18 days ago

    I don’t think Lunar Lake was a “mistake” so much as it was a reaction. Intel couldn’t make a competitive laptop chip to go up against Apple and Qualcomm. (There is a very weird love triangle between the three of them /s.) Intel had to go to TSMC to get a chip to market that satisfied this AI Copilot+ PC market boom (or bust). Intel doesn’t have the ability to make a competitive chip in that space (yet), so they had to produce Lunar Lake as a one-off.

    Intel is very used to just giving people chips and forcing them to conform their software to the available hardware. We’re finally in the era where the software defines what the CPU needs to be able to do. This is probably why Intel struggles: their old market-dominant strategy doesn’t work in the CPU market anymore, and they’ve found themselves on the back foot. Meanwhile, new devices where the hardware and software are deeply integrated by design keep coming out while Intel is still swinging for the “here’s our chip, figure it out for us” crowd.

    In contrast to their desktop offerings, Intel’s server offerings show that Intel gets it: they want to give you the right chips for the right job with the right accelerators.

    He’s not wrong that GPUs in the desktop space are going away, because SoCs are inevitably going to be the future. This isn’t because the market has demanded it or some sort of conspiracy; it’s that we literally can’t get faster without chips getting smaller and closer together.

    Even though I’m burnt out on Nvidia and the last two CPUs and GPUs I’ve bought have been all AMD, I’m excited to see what Nvidia and MediaTek do next, as this SoC future has some really interesting upsides to it. Projects like Asahi Linux’s Proton work and Apple’s GPTK2 have shown me the SoC future is actually right around the corner.

    Turns out, the end of the x86 era is a good thing?

    • schizo@forum.uncomfortable.business · 18 days ago

      contrast to their desktop offerings

      That’s because server offerings are real money, which is why Intel isn’t fucking those up.

      AMD is in the same boat: they make pennies on client and gaming (including GPUs), but dumptrucks of cash from selling Epycs.

      IMO, the Zen 5(%) and Arrow Lake bad-for-gaming results are because uarch development from Intel and AMD is entirely focused on the customers that pay them: datacenter and enterprise.

      Both of those CPU families clearly show that efficiency and a focus on extremely threaded workloads were the priorities, and what do you know, that’s enterprise workloads!

      end of the x86 era

      I think it’s less the era of x86 is ended and more the era of the x86 duopoly putting consumer/gaming workloads first has ended because, well, there’s just no money there relative to other things they could invest their time and design resources in.

      I also expect this to happen with GPUs: AMD has already given up, and Intel is absolutely going to do the same as soon as they possibly can without it being a catastrophic self-inflicted wound (since they want an iGPU to use). nVidia has also clearly stopped giving a shit about gaming - gamers get a GPU a year or two after enterprise has cards based on the same chip, and now they charge $2000* for them - and those cards are often crippled in firmware/software so they won’t compete with the enterprise cards, as well as you legally not being allowed to use the drivers in a situation like that.

      ARM is probably the consumer future, but we’ll see who and with what: I desperately hope that nVidia and MediaTek end up competitive so we don’t end up in a Qualcomm oops-your-cpu-is-two-years-old-no-more-support-for-you hellscape, but well, nVidia has made ARM SoCs for, like, decades, and at no point would I call any of the ones they’ve ever shipped high-performance desktop replacements.

      * Yes, I know there’s a down-stack option that shows up later, but that’s also kinda the point: the ones you can afford will show up for you… eventually. Very much designed to push purchasers into the top end.

    • Trainguyrom@reddthat.com · 18 days ago

      He’s not wrong that GPUs in the desktop space are going away, because SoCs are inevitably going to be the future. This isn’t because the market has demanded it or some sort of conspiracy; it’s that we literally can’t get faster without chips getting smaller and closer together.

      While I agree with you on a technical level, I read it as Pat Gelsinger intending to stop development of discrete graphics cards after Battlemage, which is disappointing but not surprising. Intel’s GPUs, while incredibly impressive, simply face an uphill battle with desktop users, and particularly gamers, in ensuring that every game a user wishes to run generally works without compatibility problems.

      Ideally Intel would keep their GPU department going, because they have a fighting chance at holding a significant market share now that they’re past the hardest hurdles, but they’re in a hard spot financially, so I can’t be surprised if they’re forced to divest from discrete GPUs entirely.

      • randomaside@lemmy.dbzer0.com · 18 days ago

        I would like to see further development, but I always had a sneaking suspicion that its life was limited by the fact that Arc doesn’t come from Intel’s fabs either. Like Lunar Lake, Arc is also made at TSMC.

  • 1rre@discuss.tchncs.de · 18 days ago

    And here I was thinking Arc and storage were the only semi-competitive wings of Intel… They just needed a couple of years for adoption to increase.

    • Buffalox@lemmy.world · 18 days ago

      I’ve commented many times that Arc isn’t competitive, at least not yet.
      Although they were decent performers, they used twice the die size for similar performance compared to Nvidia and AMD, so Intel has probably sold them at very little profit.
      Still, I expected them to try harder this time, because the technologies needed to develop a good GPU are strategically important in other areas too.
      But maybe that’s the reason Intel recently admitted it couldn’t compete with Nvidia on high-end AI?

      • InverseParallax@lemmy.world · 18 days ago

        Arcs are OK, and the competition is good. Their video encode performance is absolutely otherworldly though, just incredible.

        Mostly, they help bring the iGPU graphics stack and performance up to par, and keep games targeting them well. They’re needed for that alone if nothing else.

          • InverseParallax@lemmy.world · 18 days ago

            I mean, fine, but it’s first gen; they can fix the features and yields over time.

            First-gen chips are rarely blockbusters; my first-gen chips were happy just to make it through bring-up and customer eval.

            Worse, because software is so much of their stack, they had huge headroom to grow.

            • Buffalox@lemmy.world · 18 days ago

              First-gen chips are rarely blockbusters

              True, yet Nvidia was a nobody that arrived out of nowhere with the Riva graphics cards and beat everybody else thoroughly: ATi, S3, 3dfx, Matrox, etc.

              But you are right, these things usually take time; for instance, Microsoft was prepared to spend 10 years without making money on the Xbox because they saw it had potential in the long run.

              I’m surprised Intel considers itself so hard-pressed that it’s already thinking of giving up.

              • InverseParallax@lemmy.world · 18 days ago

                True, yet Nvidia was a nobody that arrived out of nowhere with the Riva graphics cards and beat everybody else thoroughly: ATi, S3, 3dfx, Matrox, etc.

                Actually, they didn’t.

                This was their first: https://en.wikipedia.org/wiki/NV1

                A complete failure: overpriced, under-capable, one of the worst cards on the market at the time, and it used quadratics instead of triangles.

                NV2 was supposed to power the Dreamcast, and kept the quads, but was cancelled.

                But the third one stayed up! https://youtu.be/w82CqjaDKmA?t=23

                • Buffalox@lemmy.world · 18 days ago

                  You are right.

                  and used quadratics instead of triangles.

                  Now that you mention it, I remember reading about that, but completely forgot.
                  I remembered it as the Riva coming out of nowhere. As the saying goes, first impressions last. And I only learned about NV1 much later.

                  But the third one stayed up!

                  👍 😋

                  But Intel also made the i815 GPU, so Arc isn’t really their first.

      • hamsterkill@lemmy.sdf.org · 18 days ago

        Still, I expected them to try harder this time, because the technologies needed to develop a good GPU are strategically important in other areas too

        I think I read somewhere that they’re having problems getting AIB partners for Battlemage. That would be a significant impediment to continuing in the consumer desktop market unless Battlemage can perform better (business-wise) than Alchemist.

        They’ll probably continue investing in GPUs even if they give up on Arc; it might just be for the specialized stuff.

      • 1rre@discuss.tchncs.de · 18 days ago

        Yeah, true. Plus I bought my A770 at pretty much half price during the whole driver-issues period, so I eventually got a 3070-performing card for like $250, which is an insane deal for me, but there’s no way Intel made anything on it after all the R&D and production costs.

        The main reason Intel can’t compete is the fact that CUDA is both proprietary and the industry standard; if you want to use a CUDA library you have to translate it yourself, which is kind of inconvenient, and no datacentre is going to go for that.

        • Buffalox@lemmy.world · 18 days ago

          The main reason Intel can’t compete is the fact that CUDA is both proprietary and the industry standard

          AFAIK the AMD stack is open source; I’d hoped they’d collaborate on that.

          • 1rre@discuss.tchncs.de · 18 days ago

            I think Intel supports it (or at least a translation layer), but there’s no motivation for Nvidia to standardise on something open source, as the status quo works pretty well for them.

        • Trainguyrom@reddthat.com · 18 days ago

          The main reason Intel can’t compete is the fact that CUDA is both proprietary and the industry standard

          Funnily enough, this is actually changing because of the AI boom. Would-be buyers can’t get Nvidia AI cards, so they’re buying AMD and Intel and reworking their stacks as needed. It helps that there are also translation layers available now that translate CUDA and other otherwise vendor-specific stuff to the open protocols supported by Intel and AMD.
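
          As a rough illustration of what “reworking the stack” can look like (a sketch, not a definitive recipe - PyTorch’s ROCm builds expose AMD GPUs under the “cuda” device name, and Intel GPUs show up as “xpu” when the XPU backend is installed):

          ```python
          # Sketch: pick whichever GPU backend is present instead of hard-coding CUDA.
          import torch

          def pick_device() -> torch.device:
              if torch.cuda.is_available():  # NVIDIA CUDA, or AMD GPUs via ROCm builds
                  return torch.device("cuda")
              if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs (XPU backend)
                  return torch.device("xpu")
              return torch.device("cpu")  # CPU fallback

          x = torch.ones(8, device=pick_device())
          print(x.device)
          ```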

  • woodgen@lemm.ee · 18 days ago

    Gelsinger said the market will have less demand for dedicated graphics cards in the future.

    In other news, Intel was replaced in the Dow Jones by Nvidia, a company that exclusively produces dedicated graphics cards: https://lemmy.world/post/21576540

    • ne0phyte@feddit.org · 18 days ago

      Nvidia does more than just GPUs.

      Nvidia makes both SoCs like the Tegra series and server CPUs (Grace, which is ARM-based and designed to be used with their ML/AI cards, with much higher bandwidth than regular CPUs).

      Nvidia also just announced that they are working on a consumer desktop CPU.

    • SaharaMaleikuhm@feddit.org · 18 days ago

      Well I had the same thought a few years ago when APUs started getting better. But then I’m not the CEO of a huge tech company, so nobody lost their job because I was wrong.

    • humanspiral@lemmy.ca · 17 days ago

      The historical success/performance of the Dow is biased by these strategic decisions to replace losers whose decline is obvious before they actually go bankrupt.

      • humanspiral@lemmy.ca · 17 days ago

        Intel is a CIA champion. Vector for backdoor spying and kill switches. Why not embed plastic explosives on every motherboard, since US/Trump praised the Israel strategy?

        Taiwan declaring independence and offering to host US nuclear missile bases… incoming.

    • XTL@sopuli.xyz · 18 days ago

      Intel has been a mistake since 1978. But evil doesn’t generally die.

  • The Hobbyist@lemmy.zip · 18 days ago

    I’m wondering: could the same performance as the integrated RAM Intel used for Lunar Lake be achieved with the latest CAMM modules? The only way going integrated really pays off is with HBM; anything else seems like a bad trade-off.

    So either you go with HBM and get real bandwidth and latency gains, or with CAMM and get decent performance plus upgradeable RAM modules. The on-package RAM Intel used provides neither the HBM performance nor the CAMM modularity.

    • realitista@lemm.ee · 18 days ago

      I wonder why both isn’t possible: build some into the chip, but leave some DIMM slots for upgradeability too, at a bit lower speed.

      • Trainguyrom@reddthat.com · 18 days ago

        Especially with how normal memory tiering is nowadays, particularly in the datacenter (Intel’s bread and butter), now that you can stick a box of memory on a CXL network and put the memory from the last-gen servers you just retired into said box as a third or fourth tier of memory before swapping. And then there’s the fun non-tiered memory stuff that CXL enables. Really, CXL enables so much cool stuff that it’s going to be incredible once it starts hitting small single-row datacenters.

    • just_another_person@lemmy.world · 18 days ago

      Transfer speed isn’t the big issue; it’s density and reliability. Packing more heat-generating stuff onto the SoC package just makes it more difficult to dissipate. The transfer of data to where it needs to be is still the same, so the trade-off is pretty much a wash in that sense, apart from a reduction in overall power consumption.

  • ravhall@discuss.online · 18 days ago

    Blaming the loss on SoCs? Lmfao. SoCs are better. Just stop offering a lower tier and make all SoCs 32GB+.

    … looking at you too, Apple.

  • RedWeasel@lemmy.world · 18 days ago

    I see the idea of Intel dropping Arc as good news for AMD. Intel was going to chip away at AMD’s market share well before Nvidia’s. It would be better to have more competition, though.

    • bruhduh@lemmy.world · 18 days ago

      AMD would never close their GPU department, because they sell their APUs for the Xbox, PlayStation, and Steam Deck.

      • ultranaut@lemmy.world · 18 days ago

        Intel could conceivably have competed with them there too, if they were still a competent business and not in a crisis of mismanagement. It’s amazing how much better AMD is managed compared to Intel.

      • RedWeasel@lemmy.world · 18 days ago

        I wasn’t saying AMD would shut down, but that Intel would take market share from them before truly affecting Nvidia’s market share, i.e. AMD and Intel would be fighting over the same 25-ish% of the PC market.

  • Treczoks@lemm.ee · 18 days ago

    With RAM access being the one big bottleneck of a modern PC, can anyone in the know tell me about those SoCs? How much RAM did they have, and was it faster than external DIMMs?

    • waitmarks@lemmy.world · 17 days ago

      You shouldn’t be comparing with DIMMs; those are a dead end at this point. CAMMs are replacing DIMMs and are what future systems will use.

      Intel likely designed Lunar Lake before the LPCAMM2 standard was finalized, which is why it went on-package. Now that LPCAMM modules are a thing, it makes more sense to use those, as they provide the same speed benefits while still allowing user-replaceable RAM.

    • humanspiral@lemmy.ca · 17 days ago

      For laptops, it seems like a winner to me. Do you really need to expand your laptop’s GPU memory? It was justified by lower power and likely better transfer rates.

  • bruhduh@lemmy.world · 18 days ago

    They have to try reviving the idea they used in the Intel Core i7-8709G first, though.