Intel Businesses The Almighty Buck The Internet

Intel To Cut IoT Jobs (electronicsweekly.com) 107

An anonymous reader shares a report: Intel is laying off people in its IoT group following its recent cuts to three of its IoT products -- the Joule, Edison and Galileo boards. 97 jobs are to be lost in Santa Clara and up to 40 more in Leixlip, Ireland. IoT accounts for less than 5% of Intel's sales.
This discussion has been archived. No new comments can be posted.

  • So.... (Score:4, Insightful)

    by Anonymous Coward on Tuesday July 04, 2017 @10:03AM (#54741795)

    ...the IoT bubble exploded before being fully inflated?

    • Re:So.... (Score:5, Insightful)

      by 0100010001010011 ( 652467 ) on Tuesday July 04, 2017 @10:21AM (#54741899)

      More like Intel was playing too much catch-up with ARM, AVR, ESP8266, MIPS, PPC and the other embedded chipsets.

      Turns out "But it's x86!" isn't as much of a selling point as they thought it was.

      • So what was the product Intel was positioning for the IoT market? The 386SX? If they just took that design, added some level-1 cache and put it on their cheapest current process, it would be optimal for the job.

        Why wouldn't a 386 be much of a selling point, when every embedded OS out there - not just Linux or BSD, but also things like FreeDOS, QNX, Minix, Minuet, et al. - exists for that platform? If one is looking for flexibility in the number of hardware sources, one can limit oneself to Linux & BSD.

        • To bring up a new system, getting the applicable device drivers is a much bigger deal than the choice of ARM vs x86 instruction set (which is mostly just a compiler switch). Unless your IoT device has the exact same hardware peripherals as a legacy PC, an x86 CPU doesn't buy you much over an ARM CPU.
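
          A minimal sketch of the "compiler switch" point (the toolchain names are illustrative and assume the cross-compilers are installed): the same portable C source builds for either ISA, and the device drivers are where the real porting effort hides.

            /* Build for x86:  gcc -O2 -o blink_x86 blink.c
             * Build for ARM:  arm-linux-gnueabihf-gcc -O2 -o blink_arm blink.c */
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                /* stand-in for real device work; driver calls would go here */
                for (int i = 0; i < 3; i++) {
                    printf("toggle LED (iteration %d)\n", i);
                    sleep(1);
                }
                return 0;
            }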

          • Intel tried IoT with the Curie chip, too.

            http://www.mouser.com/ProductD... [mouser.com]

            $20 for a failure of a chip. No one uses it. It has bugs, and its internal features are not really competitive (or even functional, in some cases!).

            They made a TV show from it, too: https://en.wikipedia.org/wiki/... [wikipedia.org] but since it was not a success, they canceled season 2 and there won't be any more.

            Intel does not have the right people for this area, and they let go anyone who DID have a clue ;(

            Oh, Intel. Sigh...

        • by sjames ( 1099 )

          Probably because you could buy 4 or more good ARM boards for the cost of a single Intel board.

          Intel discovered that nobody is going to pay the Intel surcharge in a field without a pile of legacy software.

          • But if you're doing the 386, there IS legacy software. While the latest Linux kernels may have dropped 386 support, the platform still has a bunch of legacy OSes that anyone can dig up.

            • by sjames ( 1099 )

              IoT itself doesn't have a lot of legacy software that needs x86. It's all embedded stuff, and it's fairly new. It's also common to have the source code, so there's no problem compiling for whatever architecture is convenient.

          The 386SX? If they just took that design, added some level-1 cache and put it on their cheapest current process, it would be optimal for the job.

          That would be extremely over-complex.
          The x86 ISA isn't exactly a lean architecture and instruction set.
          Modern ARM can do much better with a small transistor footprint.

          But too bad, Intel discontinued their StrongARM series.

          Why wouldn't a 386 be much of a selling point, when every embedded OS out there - not just Linux or BSD, but also things like FreeDOS, QNX, Minix, Minuet, et al. - exists for that platform?

          The main selling point of an x86 chip would be code compatibility.
          But nobody in their right mind is going to try to run Windows XP on an IoT device.
          All the other OSes are also available on ARM.

          The other point where x86 shines is raw performance on high-range CPUs (simply because Intel and AMD [x86] are the only companies spending R&D money on optimizing chips for that segment. Everybody else - Apple, Qualcomm, etc. - are optimizing for the embedded market), but that's absolutely NOT what's needed on IoT devices.

          • >x86 ISA isn't exactly a lean architecture and instruction set.
            >Modern ARM can do much better with a small transistor foot print.

            In which universe is the ARM instruction set "lean"?

            Every instruction is 32 bits long, clogging one's instruction bandwidth. A little Huffman encoding goes a long way. Gates are cheap, IO bandwidth is not. The benefits of the regularity of RISC instruction sets were quickly lost as gates got smaller and the compute/IO bandwidth tradeoff changed in favour of compute.
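
            A back-of-envelope sketch of that bandwidth claim (the opcode mix and per-instruction byte costs are hypothetical, chosen only to illustrate the payoff of giving frequent operations short encodings):

              /* Expected instruction size: fixed 32-bit encoding vs a
               * variable-length one.  Frequencies and byte costs below are
               * hypothetical, purely for illustration. */
              #include <stdio.h>

              int main(void)
              {
                  /* hypothetical dynamic mix: common ops get short encodings */
                  double freq[]  = { 0.40, 0.30, 0.20, 0.10 };  /* mov/alu/branch/other */
                  double bytes[] = { 1.0,  2.0,  3.0,  6.0  };

                  double avg = 0.0;
                  for (int i = 0; i < 4; i++)
                      avg += freq[i] * bytes[i];

                  printf("fixed 32-bit encoding: 4.0 bytes/insn\n");
                  printf("variable encoding:     %.1f bytes/insn\n", avg);  /* 2.2 */
                  return 0;
              }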

            • > Every instruction is 32 bits long, clogging one's instruction bandwidth.

              ARM Cortex-M processors use the compact Thumb encodings (Thumb is 16-bit; Thumb-2 mixes 16- and 32-bit instructions). They've had a while to optimise the instruction set for embedded and SoC use.
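
              A minimal way to check the density claim yourself, assuming an arm-none-eabi GCC toolchain (the flags and file names are illustrative): build the same function once as A32 and once as Thumb, then compare the .text sizes.

                /* density.c - one source, two encodings:
                 *   arm-none-eabi-gcc -O2 -marm   -c density.c -o a32.o
                 *   arm-none-eabi-gcc -O2 -mthumb -c density.c -o thumb.o
                 *   arm-none-eabi-size a32.o thumb.o   (Thumb .text is typically smaller)
                 * Cortex-M parts execute Thumb only, so -mthumb is mandatory there. */
                #include <stdint.h>

                /* a small checksum loop: enough work to show the difference */
                uint32_t checksum(const uint8_t *buf, uint32_t len)
                {
                    uint32_t sum = 0;
                    while (len--)
                        sum = (sum << 1) + *buf++;
                    return sum;
                }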

              • > Every instruction is 32 bits long, clogging one's instruction bandwidth.

                ARM Cortex-M processors use the compact Thumb encodings (Thumb is 16-bit; Thumb-2 mixes 16- and 32-bit instructions). They've had a while to optimise the instruction set for embedded and SoC use.

                Yes. Thumb. A major mode switch to use a smaller instruction. I've integrated a few ARMs into chips (first the ARM7TDMI) and they were pretty much a nightmare to bring into line with normal OS practices. 15 years later, everyone seems to think this cranky instruction set and system model is normal, because it's what they grew up with. Yet the funky interrupt model, the funky mode switching, the lack of standard device discovery (that Linus Torvalds complained about) and bandwidth-hungry instructions do not

            • >x86 ISA isn't exactly a lean architecture and instruction set.
              >Modern ARM can do much better with a small transistor foot print.

              In which universe is the ARM instruction set "lean"?
              Every instruction is 32 bits long, clogging one's instruction bandwidth.

              In my universe, where the *performance* that interests me is the power budget of the IoT device, which is rather closely related to how much the chip maker can cram into as little silicon as possible. The current generation of ARM chips simply provides more with less silicon (among the chief reasons: the RISC instruction set, and the constant instruction width you've criticized, which makes the instruction pipeline much simpler) (whereas x86 chips tend to be a RISC-ish backend with a huge x86 interpreter on top of i

            The 386SX? If they just took that design, added some level-1 cache and put it on their cheapest current process, it would be optimal for the job.

            That would be extremely over-complex. The x86 ISA isn't exactly a lean architecture and instruction set. Modern ARM can do much better with a small transistor footprint.

            But too bad, Intel discontinued their StrongARM series.

            Actually, that chip was rebranded as XScale, and sold to Marvell 10 years ago. It's not that Intel didn't try working w/ it.

            Why wouldn't a 386 be much of a selling point, when every embedded OS out there - not just Linux or BSD, but also things like FreeDOS, QNX, Minix, Minuet, et al. - exists for that platform?

            The main selling point of an x86 chip would be code compatibility. But nobody in their right mind is going to try to run Windows XP on an IoT device. All the other OSes are also available on ARM.

            The other point where x86 shines is raw performance on high-range CPUs (simply because Intel and AMD [x86] are the only companies spending R&D money on optimizing chips for that segment. Everybody else - Apple, Qualcomm, etc. - are optimizing for the embedded market), but that's absolutely NOT what's needed on IoT devices.

            Uh, no. QNX is x86-only, IIRC (unless RIM ported it for BlackBerry), and Minuet is written specifically in x86 assembly, so that it could create the most compact code. While Minix is FOSS, it has only been/is being ported to the BeagleBone: if you wanna run it on a Raspberry Pi or an Arduino, good luck!

            I also wouldn't call the tablet & phone markets 'embedded' - they are more of

        • The 386 is a bear for IoT. Why would you want a bloated CISC system like that if you're not using an ISA bus or something similar?

          That would not compete with ARM on any process at all, and it would not be cheaper to make than a modern x86, other than having a lower transistor count.

          For IoT you want a microcontroller or SoC; you don't really want a CPU that is going to need a bunch of other chips to provide required peripherals. And if you add that stuff in, now you don't have legacy code that can use it, and the

          • OK then, take a 486, plonk the chipset etc. onto the same die, and release it as a PC-compatible SoC.
            • OK then, take a 486, plonk the chipset etc. onto the same die, and release it as a PC-compatible SoC.

              And you have something slower than an ARM Cortex-M that uses more transistors, runs hotter, and is more difficult to program.

              And you better also hire a swarm of extra engineers to write code to include in on-chip ROM to run those peripherals.

              What you'll have is worse than what Intel is already floundering with. It sounds like a good idea, I understand that. I used to say the same thing before I started doing firmware programming and actually working with these things and reading the datasheets. Once you're f

              • The123king was not wrong. At current process nodes, Intel could start by taking either a 386 or 486 (they did experiments w/ the 80376 and 80386EX), putting it on one of their Altera FPGAs, putting the ISA or EISA bus on it - in fact, putting an entire legacy early-90s PC on it, w/ adequate RAM & flash - and there would be a load of software that would support it. All the early versions of Windows, FreeDOS, QNX, Minuet, Minix - platforms that for all practical purposes only exist on x86 but not really

                • It does have to be hotter. Physics. Modern micros are designed to need less power through their choice of instructions and features. That old IP block can't be made cool; you would need a new design. Perhaps you're one of those people who think "mobile" (laptop) CPUs are just a scam and that they're really the same?

                  Here is the thing: There are already System-on-Chip products that are full-featured. There is no demand for EISA or any of that, and new devices don't have drivers so you don't even get code reuse out o

      • They can't even keep up with Texas Instruments on features, and then they want an even higher price, when TI is already getting a premium over AVR and Espressif.

        There is basically no use case where they offer an advantage of any sort, unless you only like Intel hardware. If there is at least one other company you're willing to use, they probably have something better for less, and something else for even less than that.

      • In the same way that the 'S' in IoT stands for "security", so the "X" in IoT stands for "x86".
    • by Anonymous Coward

      No, it's just that Intel was too late to the game with an expensive, closed system. If you can't make a better product than the Pi or Arduino, then you're wasting your time.

    • Investors will find someplace to cut costs to keep their toys.
  • Intel never had the right product focus for these IoT devices. Overall cost was too high for hobbyists, and the main product differentiation was basically "we're Intel instruction set compatible" in an age where others are offering JavaScript compatibility. I'm afraid as long as Intel makes their architecture out to be their main selling point they're going to be out of tune with these emerging markets. Same reason they missed the phone and tablet market, in my opinion.

  • whoda thunk it? (Score:5, Insightful)

    by Gravis Zero ( 934156 ) on Tuesday July 04, 2017 @10:26AM (#54741915)

    Promoting hopelessly overpriced boards in an area where x86 has no benefit, on top of insufficient documentation, wasn't the game-changer they expected! If only someone knew why. -_-

    • I wouldn't say they were overpriced for what you got. Spec-wise they were quite impressive.

      Unfortunately, we'll never know if that translates to real-world benefit, since their documentation was so bloody poor that no one ever managed to get anything running on them.

  • Core Competency (Score:5, Interesting)

    by AlanObject ( 3603453 ) on Tuesday July 04, 2017 @10:31AM (#54741931)

    I learned my lesson with Intel long ago, back in the i960 days or maybe before that. With them it is all about the CPU chips, no matter what they say. The one exception is their network interface chips.

    Here is the pattern: They use their unlimited money+market position+PR machine to fund some kind of tech, pump up a bunch of customers, trade groups, get projects started with generous relationships ("partnerships"), make lots of press.

    A year or two down the road it gets de-funded, spun-out, quietly quashed. The numbers weren't what they wanted so the inevitable corporate-level decision is to return to our "core competency" and that of course is selling CPU chips.

    Anyone who was sucked into designing something with their switch chip product line knows what I am talking about. Remember SSI? If you didn't, you dodged a bullet. InfiniBand? Network processors? FPGAs? Then, over 2+ decades (starting with the i186), every 3-4 years they would venture into the embedded controller market just to pull back out of it again. Not Intel Core? Not committed.

    However, their current product lineup for embedded is actually pretty damn good. Not only are their designs better thought out, but market and ecosystem conditions are fortuitous for them. Most of all, it is now all about selling Intel i3/i5/i7-family CPUs. That alone will keep the line alive.

    • by Anonymous Coward

      Intel's incompetence is well known among Intel employees, who had to suffer turbulent middle-management changes every time one of the amateur executives decided to play CEO.

    • InfiniBand

      I remember InfiniBand. It was committed to. Not only that, it was THE interconnect of choice for the HPC space. The problem wasn't with Intel. The problem is that InfiniBand was an off-standard solution, and then the standard caught up. It's like saying "remember SCSI" when talking about CD burners. I remember it; I also remember when ATA got good enough to use CD burners without needing an additional card installed in my computer.

      Their problem with the IoT stuff is different yet again. It was a garbage product

      • by JanneM ( 7445 )

        InfiniBand is the most common interconnect in the HPC space today, though. It doesn't seem right to call it a failure when it dominates the segment it was designed to handle. Or do you mean specifically the Intel implementation of it?

        • I didn't say it was a failure; the GP did, for reasons unknown. But I am interested in your source saying it's the most popular today. I know it was 10 years ago, but I thought that had changed to Ethernet. The last reference I could find was Wikipedia giving numbers of 181 of the top 500 using InfiniBand in 2009, and I thought that was already a downward trend.

          • by JanneM ( 7445 )

            No, more than half of the machines this year were using InfiniBand. I get the impression (note: not hard data) that IB is pushing out 10G Ethernet at the lower end of the HPC field. The latency wins are worth it for a lot of applications.
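
            As a toy illustration of why latency matters for chatty HPC codes (the latency figures below are hypothetical placeholders, not measurements), consider the per-iteration cost of many small messages:

              /* Toy model: per-iteration cost of many small messages on two
               * fabrics.  Both latency figures are assumed, for illustration. */
              #include <stdio.h>

              int main(void)
              {
                  const double ib_us  = 1.5;    /* assumed RDMA-style latency, us */
                  const double eth_us = 30.0;   /* assumed TCP/10GbE latency, us  */
                  const int    msgs   = 1000;   /* small messages per iteration   */

                  printf("IB-class fabric: %.1f ms/iteration\n", msgs * ib_us  / 1000.0);
                  printf("10G Ethernet:    %.1f ms/iteration\n", msgs * eth_us / 1000.0);
                  return 0;
              }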

    • Ain't the Core line of products way overkill for embedded? All one should need is 32 bits, and that would have a whole host of legacy software available for it.

      FPGAs - didn't they acquire Altera? Looks like that's something that Intel's Custom Semiconductor division could readily use (if it doesn't already) for any design requests they get from semiconductor houses, and run a business on that based on volumes.

      On CPUs, Intel never succeeded w/ any of their non-x86 attempts, which was a pity. i960 went

  • Last week I met a startup that had developed a cool personal "AI-powered" robot that did offline voice recognition and motion tracking. When I asked about dev kits they said they had used Joule... the remainder of our chat would be best described by pregnant pauses.

  • Intel really needs to get it together. They are a CPU company, and GPUs are all the rage right now; you cannot buy a decent video card - everything is out of stock and the companies cannot keep up with demand. Intel could very simply make GPU cards (AMD does) and make a billion dollars overnight, and save the jobs of these hard-working people. It just pisses me off that they cut jobs instead of making what is needed and keeping the staff working.
    • by thegarbz ( 1787294 ) on Tuesday July 04, 2017 @03:25PM (#54743499)

      Intel could very simply make GPU cards

      Intel are the number one GPU company in the market.

      Oh, you meant high-performance GPUs? Man, you don't have a clue about the industry, do you? Patents aside, you can't just turn around and plop out a high-performance GPU at a good cost overnight. Or even over a year. There are many tens of thousands of R&D man-hours that go into GPUs, and lots of those hours end up in the patent office, preventing other people from using the same idea.

      • In fact I do have industry knowledge. I work with Intel's engineers a lot. What you may not know is that Intel and AMD were in a CPU war for many years. Then AMD started moving to GPUs because it was losing the CPU war, and Intel kept a close eye on AMD's GPUs and kept up with the technology all these years. It wouldn't be very hard for Intel to tool up and make high-end GPUs.
        • Then AMD started moving to GPUs because it was losing the CPU war

          Yeah, how did that go?
          Oh right: spend $5.4bn buying a company - and the expertise and associated patents - from someone who was already tooled up to make high-end GPUs.

          Intel kept a close eye on AMD's GPUs and kept up with the technology all these years

          I was in Amsterdam last week and for the life of me I couldn't find anyone selling what you were smoking when you posted that.

  • No real surprise here. Said it before. [slashdot.org]

  • by Anonymous Coward

    IoT SoC requirements

    For most applications (switches, dimmers, dumb controllers, sensors) the following should be plenty:
    8- or 16-bit instruction set (64-bit is way overkill)
    1MB RAM - or perhaps tens or hundreds of KB

    Bluetooth LE

    Hardware support required (to not eat battery) for:
    tamper-proof clock
    AES-128 encrypt/decrypt
    ECDSA P-256 sign/verify
    1KB secure nonvolatile memory (unreadable but usable-by-reference) for provisioning keys and signing keys

    Not required:
    floating point

    Ideally, this should draw less than a milliwatt.
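
    A rough sketch of that wish-list as a C checklist a firmware team might hold a candidate part against (all field names and the example part are hypothetical, purely illustrative):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct iot_soc {
          uint8_t  word_bits;        /* 8 or 16 is plenty for these loads   */
          uint32_t ram_kib;          /* tens to hundreds of KB              */
          bool     ble;              /* Bluetooth LE radio                  */
          bool     hw_rtc_tamper;    /* tamper-proof clock                  */
          bool     hw_aes128;        /* AES-128 encrypt/decrypt in hardware */
          bool     hw_ecdsa_p256;    /* ECDSA P-256 sign/verify in hardware */
          uint32_t sec_nvm_bytes;    /* key store, usable-by-reference      */
          bool     fpu;              /* explicitly NOT required             */
          double   active_mw;        /* target: under a milliwatt           */
      };

      static bool meets_spec(const struct iot_soc *p)
      {
          return p->word_bits <= 16 && p->ram_kib <= 1024 && p->ble &&
                 p->hw_rtc_tamper && p->hw_aes128 && p->hw_ecdsa_p256 &&
                 p->sec_nvm_bytes >= 1024 && p->active_mw < 1.0;
      }

      int main(void)
      {
          /* hypothetical candidate part */
          struct iot_soc part = { 16, 256, true, true, true, true, 1024, false, 0.8 };
          printf("meets spec: %s\n", meets_spec(&part) ? "yes" : "no");
          return 0;
      }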

    • by Anonymous Coward

      Embedded and IoT aren't one-size-fits-all.
      However, Intel really missed the mark on these initiatives.

      The strategy seems to have taken two approaches:
      1. Non-x86-compatible x86 chips, and
      2. An x86 PC that isn't really a PC, because they don't give you a normal BIOS and only support a wonky dev toolchain which is quirkier than ARM's.

      Both approaches suffered from excess complexity which increased cost, and a lack of easily usable documentation.

      Where Intel should have focused was:

      1. A standard PC on-a-chip microc

      • What is the oldest/largest process Intel currently runs - which they haven't retooled to newer shrinks? They could use that as the platform for their embedded products, and then build an SoC w/ the 386SX, 1MB of level 1 cache, 2GB of embedded RAM, 1MB of BIOS flash, built on the old 386SX package. Then such a system could support anything from a Minix setup to a Windows XP configuration used in ATMs.

        • The trouble with just going back to an old process is that there has been a lot of learning in how to make a usable process since then. (The trouble with newer processes being that you needed that learning just to stay afloat.)

          There has been interest of late in "retroscaling", where you take all the techniques and equipment that had to be developed for the smaller processes and select from that to make a new process at a larger node. That way you can choose what's easy to fabricate and design with for the i

  • So predictable that an AC comment on Slashdot predicted it a few weeks ago.
