United Kingdom

UK Government Department Still Runs VME Operating System Installed In 1974

Qedward writes: The UK government's Department for Work and Pensions is on the hunt for a new £135,000-a-year CTO. The role comes with part of the department's £1 billion annual budget and responsibility for DWP's "digital transformation": overseeing the migration of the department's legacy systems, which still run on Fujitsu mainframes using the VME operating system installed in 1974.
  • by jfdavis668 ( 1414919 ) on Thursday January 08, 2015 @11:49AM (#48765523)
    Hackers probably couldn't even find a manual for one.
  • Modern Technology (Score:5, Interesting)

    by Galaga88 ( 148206 ) on Thursday January 08, 2015 @11:50AM (#48765539)

    How many modern systems can anybody imagine still working and apparently doing what we need them to 40 years from now?

    • Re:Modern Technology (Score:4, Interesting)

      by jbolden ( 176878 ) on Thursday January 08, 2015 @11:55AM (#48765591) Homepage

      Give me what that system cost in 1974 in inflation-adjusted dollars and I'll be happy to flip out a modern system every year. Using cheap, less durable components with redundancy is a better strategy. I live in an 1830s house, so I get the advantages of good-quality construction. But if I were building a house today, I'd use 2014's cheap materials.
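
      (As a rough back-of-the-envelope sketch of that trade-off: the 1974 purchase price, the inflation multiplier, and the yearly refresh cost below are all assumed, illustrative figures, not numbers from the story.)

```python
# Back-of-the-envelope sketch of the trade-off above: inflation-adjust an
# assumed 1974 mainframe price and see how many years of annually replaced
# commodity hardware the same money would buy. Every figure is an
# illustrative assumption, not a number from the article.

PRICE_1974 = 2_000_000         # assumed 1974 purchase price (GBP)
INFLATION_MULTIPLIER = 10.0    # rough 1974 -> mid-2010s price-level factor (assumption)
ANNUAL_REFRESH_COST = 250_000  # assumed yearly cost of cheap, redundant modern kit (GBP)

adjusted = PRICE_1974 * INFLATION_MULTIPLIER
years = adjusted / ANNUAL_REFRESH_COST

print(f"1974 price in today's money: ~GBP {adjusted:,.0f}")
print(f"That buys roughly {years:.0f} years of annual hardware refreshes")
```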

      • by gstoddart ( 321705 ) on Thursday January 08, 2015 @12:33PM (#48765999) Homepage

        Give me what that system cost in 1974 in inflation-adjusted dollars and I'll be happy to flip out a modern system every year.

        Sorry, I'm calling complete and utter bullshit.

        I've worked on enough legacy systems to know they didn't start off with some astronomical budget. They built it based on a set of requirements, coded it in house, and then it gradually expanded over many years of service.

        Mainframe applications aren't sexy or glamorous, they're built on relatively simple interfaces, and slowly expand in scope over time.

        They keep running because eventually they're woven into the fabric of every other business process you have, until they become something you can't trivially get rid of ... because every other damned thing relies on it, even if it isn't obvious to the user. You end up having to replace everything.

        My experience with migrating from legacy apps says you'd churn out a half-assed solution, which isn't compatible with the existing stuff, and which can't be made so, and which would eventually be abandoned as untenable.

        You'd produce some solution which might be good if it didn't depend on throwing away every other system which touched this.

        The vast majority of people who claim they could produce a functional replacement for legacy software in a short period of time have never been involved in that kind of process.

        If it was easy, they'd have replaced it by now.

        The problem with looking for a "track record of transitioning a large enterprise from ageing mainframe technologies to next generation web, social, mobile cloud, Big Data and deep learning technologies" is that it's a set of requirements written by idiots who don't want to replace the system; they want something completely different, which will involve re-tooling everything else that touches this existing system.

        Put your money where your mouth is, apply for the damned job.

        • by kenh ( 9056 )

          My experience with migrating from legacy apps says you'd churn out a half-assed solution, which isn't compatible with the existing stuff, and which can't be made so, and which would eventually be abandoned as untenable.

          This is something the federal government proves over and over again... The IRS's tax systems are a staggering collection of decades-old COBOL code, the air traffic control system until very recently ran on vacuum-tube computers, and the FBI has tried, and failed, repeatedly to transition off a

          • by gstoddart ( 321705 ) on Thursday January 08, 2015 @01:09PM (#48766477) Homepage

            Well, the problem happens when some technology evangelist or manager who doesn't know a damned thing about the existing system claims it's easy to migrate it to modern tools.

            And neither the customer, nor the guy saying it's easy, has the barest clue about just how many other things depend on that system, and nobody can fully enumerate the functionality and corner cases.

            And then you end up trying to shoe-horn a purpose-built piece of software which has run fine for decades into a modern paradigm, and realize you are failing utterly.

            Because the modern tools usually simply can't accommodate all of the rules and logic in that system. They can't be cajoled into having enough flexibility, or simply can't do the same task.

            People consistently underestimate just how well these systems do their job, and just how many little corner cases and integration points have been woven into them over the years. The platform is no longer elegant, or easy to explain, but it just keeps working. But dozens of other things rely on it, and if you change the underlying thing you end up rebuilding everything else.

            I've been on several projects trying to replace stuff built in the 60's and 70's -- and I wouldn't go near another one without very loudly saying how much risk is involved. Hell, even a system which has been around only since the 90s might be non-trivial to migrate away from -- precisely because in the 90s people were still building much more purpose-specific software.

            It's a catch-22 ... they get increasingly difficult to maintain, but they sometimes are impossible to replace.

            As I said, if it was easy to replace these systems, it would have been done already. Discovering just how difficult this can be has been the downfall of many a naive person who claims it's an easy thing to do.

          • by jbolden ( 176878 )

            I actually know people on the IRS system. It isn't staggeringly complex; it is moderately complex. They also have a lot of dysfunctional management, lack of proper skills, unclear budgeting all the way up, changing objectives, and unnecessary requirements, all of which drive the costs sky-high.

        • by sjames ( 1099 )

          It certainly can be done. The problem is that it requires highly skilled developers committed to a multi-year, multi-phase project. It tends not to be done because management is rarely willing to commit to such a project and isn't willing to pay to have a full analysis and scope put together. No sane developer is going to be willing to do such a complex and detailed analysis for free up front knowing how likely it is that the paying work either won't get done or will be farmed out to code monkeys working fr

          • by King_TJ ( 85913 )

            Yeah, ANYTHING is possible given enough talent, dedication and funding.

            In reality though, even these multi-year, multi-phase implementations tend to go way over budget and fail to yield everything promised.

            I've seen it happen, first-hand, when a company I worked for decided to implement a new ERP system and phase out a number of other applications and processes. They DID shell out the money to get the analysis done properly, but the problem really came in with the ability of the new software to perform as inte

        • by jbolden ( 176878 )

          The GP and I were talking about the cost of the mainframe, not the application layer at all.

          As for legacy conversions, click on my link: I've done dozens of them, quite successfully. Absolutely, capturing business rules is a big deal. And frankly, most mainframe applications were labor-inefficient in their construction. That doesn't mean $100m worth of programming can be replaced for $1m, but it can be replaced for $10m.

        • That's a great point. The evolution of life has worked the same way. There are some proteins that so many other proteins interact with and depend on that changing them would be catastrophic; I happened to be reading about tubulin and actin [nih.gov] today:

          The likely explanation is that the structure of the entire surface of an actin filament or microtubule is constrained because so many other proteins must be able to interact with these two ubiquitous and abundant cell components. A mutation in actin th

      • by Ed Avis ( 5917 )
        I think you're missing the point. It is not about hardware durability. The original hardware installed in 1974 has long since been replaced (probably several times over). It is the software that costs money over the long term - hiring programmers to maintain it. And it is the software that is the reason the system hasn't been replaced with something else.
        • by jbolden ( 176878 )

          I think the GP was talking about the hardware. If not, then on every Unix box there are routines still in use from that era. For example, vi/Vim is an extension of ed (ed -> ex -> vi -> Vim), which dates to 1971.

    • by guruevi ( 827432 )

      Depends on what you count as modern. I have a Sun UltraSPARC box from the mid-90s which is still used to cross-compile things. Some things just stick around, especially in government and research but also in established businesses: things are kept alive for decades because there is no funding to replace them, and for most projects the people who maintain them are cheaper than starting a new project.

      This is mainly due to the inbreeding and subsequent incompetence on the part of the people in charge of financ

    • I had a Black MacBook (2006) that ran for eight years until the CPU fan went kablooey. The only reason I didn't take it down to the Apple Store to get it repaired was the obsolete 32-bit CPU. Newer updates for installed software are now 64-bit only.
    • Re:Modern Technology (Score:5, Interesting)

      by TheGratefulNet ( 143330 ) on Thursday January 08, 2015 @12:21PM (#48765839)

      http://www.eevblog.com/forum/t... [eevblog.com]

      nice old classic tek test gear. highly in demand by collectors and those who appreciate good old fashioned engineering and build quality. the last of the 'repairable' tek scopes, pretty much (and even this is borderline repairable, with many custom chips).

      still, a few new caps, a new battery backed nvram module and you have another 20 or 30 yrs left on this scope.

      search that same forum for other old test gear. power designs (brand) power supplies are also built like tanks and run forever. I have 4 of them at home in my lab and they date from the mid 50's to early 60's. they still hold their precision and would cost $5k to $10k today, if you could even buy them.

      I have audio gear that I personally built in the 70's and 80's that still runs fine (hafler amps, etc).

      today, it's hard to find things built to last, but it USED to be the norm "before your mother was born", so to speak.

      • I learned electronics on those old Tek scopes in the early 1990s. One day I slapped the scope in the side because it wasn't working right. My instructor came over and told me to never ever slap the scope under any circumstances. A moment later he slapped the scope and the problem went away. Go figure.
        • by mrbester ( 200927 ) on Thursday January 08, 2015 @01:06PM (#48766443) Homepage

          Because, as a student, if you hit it and it breaks, you did something dumb and reduced the number of units for the class to use. However, as an instructor, if he hits it and it breaks, it was due for replacement.

        • by itzly ( 3699663 )
          A novice was trying to fix a broken Lisp machine by turning the power off and on.

          Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong."

          Knight turned the machine off and on.

          The machine worked.

          http://en.wikipedia.org/wiki/Hacker_koan
      • today, it's hard to find things built to last, but it USED to be the norm "before your mother was born", so to speak.

        Well, no. While it's true that the race to the bottom has increased over the last few decades - things pretty much have always been "built to sell", and if they lasted that was a bonus rather than a design feature.

    • by Dr. Evil ( 3501 ) on Thursday January 08, 2015 @12:25PM (#48765881)

      Port it to Minecraft. There seems to be some good 1970s CS work happening there.

    • by TWX ( 665546 )
      We've seen screencaps of 14-year uptimes on Cisco 2500-series Routers before, so I'd bet that a lot of networking equipment, if high quality to begin with, could make it that long, assuming that it's still doing what the users need.
    • Very few, sadly. They aren't designed to be long-running workhorses.
  • old != bad (Score:5, Informative)

    by AndroSyn ( 89960 ) on Thursday January 08, 2015 @11:51AM (#48765543) Homepage

    My money is on this VME system being around for another 20 years while they struggle to replace it with some mess of Java and Oracle (you know they're going to use Oracle). The replacement will be overpriced, late, and won't actually work.

    Just because something is old doesn't mean it needs to be replaced. In short, why not just upgrade the mainframe?

    • Re:old != bad (Score:5, Interesting)

      by Shinobi ( 19308 ) on Thursday January 08, 2015 @11:57AM (#48765607)

      No no, like other big IT projects in the UK, it will use "the very latest in Agile know-how", cost 3 times as much as any clusterfuck that involves Oracle, take 50% longer, and spread 300% more blame on "old fossils"....

      Disclaimer: Had to interface with an EU project under UK IT auspices last year.... Painful....

    • Re:old != bad (Score:4, Insightful)

      by ranton ( 36917 ) on Thursday January 08, 2015 @12:01PM (#48765657)

      Just because something is old doesn't mean it needs to be replaced. In short, why not just upgrade the mainframe?

      I have no idea how common VME developers are, but when dealing with legacy systems you do have to worry about being able to find qualified people to work on your software. Not only are the skills rare, but most people are going to be wary about pigeon-holing their career by focusing on such an obscure system. You will either have to rely on sub-par employees or pay well over market rates.

      Hiring expensive employees / consultants may still be desirable over a risky migration, but the expense (either in salary or in low quality employees) shouldn't be ignored.

      • I have no idea how common VME developers are, but when dealing with legacy systems you do have to worry about being able to find qualified people to work on your software. Not only are the skills rare, but most people are going to be wary about pigeon-holing their career by focusing on such an obscure system. You will either have to rely on sub-par employees or pay well over market rates.

        These days, I'd take job security over an "over market rate" salary and fancy perks. As long as I can pay my bills, the work is fairly interesting, my teammates and managers actually appreciate me and value my skill set, and I'm not micro-managed to death, I'd be happy to pick up some VME skills.

        I don't consider myself a "low quality" employee, I just don't have fancy tastes or a tech-toy habit to feed (nor do I have kids). Now 51, I have always lived responsibly, am debt-free (with enough savings/in

  • Does it still work? (Score:3, Interesting)

    by 91degrees ( 207121 ) on Thursday January 08, 2015 @12:03PM (#48765673) Journal
    If so, why fix it? What are the tangible benefits of a new system?
    • Government contracting money.

    • Depends what the requirements are.

      Usually, this sort of thing happens because requirements are changing faster than the old system can be maintained to keep up.

      I wouldn't be surprised if this is to help automate the swingeing series of "sanctions" that are carried out to remove the benefits from job seekers in this country.

      Things like suspending their payments for...

      * Being late for an appointment at the job centre (by approx 2 minutes).... because they were attending a job interview
      * Not attending a job in

    • What are the tangible benefits of a new system?

      Keep in mind that all of these are only possibilities:
      1. Reduced operating expenses. Modern computers are much more power efficient than old ones (a rough cost sketch follows this list)
      2. Faster response time. If you keep the visual wizz-bangs to a minimum, a modern system should be able to serve up a search faster
      3. Cheaper hardware replacement, edging towards 'actually able to replace it'. Remember NASA hitting up garage sales for old parts? The old hardware tended to be very robust, but it still fails on occasion, and it's not made anymo
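
      (To put an illustrative number on possibility 1, here is a sketch of the power-cost argument. The power draws and electricity price are assumptions, not figures from the thread.)

```python
# Sketch of possibility 1 above (reduced operating expenses): compare the
# yearly electricity bill of an assumed legacy mainframe against assumed
# modern replacement hardware. Every figure is an illustrative assumption.

HOURS_PER_YEAR = 24 * 365

LEGACY_KW = 30.0       # assumed continuous draw of old mainframe plus cooling (kW)
MODERN_KW = 5.0        # assumed draw of the replacement hardware (kW)
PRICE_PER_KWH = 0.15   # assumed electricity price (GBP per kWh)

def annual_cost(kilowatts: float) -> float:
    """Yearly electricity cost for a constant draw of `kilowatts`."""
    return kilowatts * HOURS_PER_YEAR * PRICE_PER_KWH

saving = annual_cost(LEGACY_KW) - annual_cost(MODERN_KW)
print(f"Estimated annual power saving: GBP {saving:,.0f}")
```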

      • by plopez ( 54068 )

        1) Bogus; increases in hardware capability have always been gobbled up by bloated software.
        2) Yeah..... right. The first impulse will be to slap on a shiny new GUI. And the monkeys hired to code it will probably use bubble sorts or worse (see the sketch after this list).
        3) Emulators
        4) How about 'growing your own'? Remember, the hard part is not learning to program; that can be done in a couple of years. The hard part is understanding the business rules.
        5) If you use an emulator you have plenty of hardware space, though you might hav
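
        (The sketch promised in point 2: a toy comparison of an O(n^2) bubble sort against Python's built-in sort on the same data, just to illustrate why the choice of algorithm matters. The input size is arbitrary.)

```python
# Toy benchmark for point 2 above: bubble sort versus the built-in sort.
import random
import timeit

def bubble_sort(items):
    """Classic O(n^2) bubble sort; returns a new sorted list."""
    data = list(items)
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

data = [random.random() for _ in range(3_000)]
t_bubble = timeit.timeit(lambda: bubble_sort(data), number=1)
t_builtin = timeit.timeit(lambda: sorted(data), number=1)
print(f"bubble_sort: {t_bubble:.3f} s    sorted(): {t_builtin:.5f} s")
```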

        • 1. False. Frequently true, but not 'always'. I've seen rooms freed up in exchange for two cube-shaped servers under a desk, with the operators praising how fast things now were...
          2. See initial disclaimer (only possibilities) and I put the visual wiz-bang specification in there for a reason.
          3. You need somebody to program the emulator. They aren't always available.
          4. I actually mentioned 'grow your own' - it can get really expensive, because new people don't want to be locked into your legacy system

    • by quetwo ( 1203948 )

      Finding parts for the aging system is probably becoming more and more difficult. Regardless of the software platform, the hardware is aged enough that certain parts are bound to be harder and harder to obtain should they need replacement.

      There is also something to be said for reviewing all the business rules and updating them to meet current needs. That usually happens during a revamp of the system.

  • by www.sorehands.com ( 142825 ) on Thursday January 08, 2015 @12:05PM (#48765677) Homepage

    Many people are shocked that computers/systems still run after 20 years, but it says a few things:

    1. That people are used to crap code that can't keep running.
    2. That people are used to crap products that can't last for more than a couple of years.

    If it ain't broke, why fix it? They sent man to the moon on less CPU horsepower than my Nexus 6. Voyager has been running for more than 35 years in the harshness of space.

    • by LWATCDR ( 28044 )

      News Flash.
      The US's ICBMs are from the 1960s and the US still uses tankers and strategic bombers from the 1950s.
      Good stuff lasts.

      • The requirements in those fields don't change.

        "Drop the bomb on the target" is a problem defined by the laws of physics. I've seen artillery pieces with old brass analog computers that still work perfectly.

        "Make a system that automates the processing of the asinine new rules for Job Seekers Allowance" is a moving target.

        • by LWATCDR ( 28044 )

          But the job of the OS has not changed. VMS is actually a great OS for this kind of system. Frankly, it is a real shame that it is in the hands of HP today.

      • The US's ICBMs are from the 1960s and the US still uses tankers and strategic bombers from the 1950s.

        B-52s have been rebuilt, upgraded, and refurbished so many times that they may as well be the Ship of Theseus [wikipedia.org]. Furthermore, the munitions they carry aren't really the same these days in most cases.

      • The US's ICBMs are from the 1960s and the US still uses tankers and strategic bombers from the 1950s.

        1. They have had a lot of upgrades since then
        2. We're running up to some timelines where we're going to have to spend a lot of money to replace them, especially the ICBMs, because they just can't be extended anymore and the equipment to manufacture replacements no longer exists. More importantly, the skills to make replacements using the old techniques no longer exist in many cases.

    • If it ain't broke, why fix it?

      Code churn. Can't make contracting money by not rewriting things all the time.

    • Not really. We're not surprised that systems from 40 years ago are still able to do the things they were designed to do; we're surprised that the requirements haven't morphed beyond all recognition in that time. If you spend three years developing a software system, and at the end of those three years your requirements look even remotely similar to the ones you started with, then these days you consider yourself very lucky. The idea that you could deploy a system and 40 years later your customer wou

      • by Todd Knarr ( 15451 ) on Thursday January 08, 2015 @12:41PM (#48766093) Homepage

        They have. But they didn't do it overnight; they did it in small bits at a time, and those 40-year-old systems were patched or updated and debugged with each change. The result is a twisted nightmare of code that works, but nobody really understands why and how anymore.

        And the documentation on the requirements changes is woefully incomplete, because much of it has been lost over the years (or was never created, because it was an emergency change at the last minute and everybody knew what the change was supposed to be, and afterwards there were too many new projects to allow going back and documenting things properly), or is inaccurate because of changes during implementation that weren't reflected in updated documentation.

        As long as you just have to make minor changes to the system, you can keep maintaining the old code without too much trouble. Your programmers hate it, but they can make things work. Recreating the functionality, OTOH, is an almost impossible task, due to the nigh-impossibility of writing a complete set of requirements and specifications. Usually the final fatal blow is that management doesn't grasp just how big the problem really is; they mistakenly believe all this stuff is documented clearly somewhere and it's just a matter of implementing it.

    • They sent man to the moon on less CPU horsepower than my Nexus 6.

      I wouldn't be too sure of that. While the Apollo guidance computer didn't have much horsepower, it didn't *need* much horsepower... it was mostly a crude control system that performed only very basic calculations. All of the heavy number crunching was done by multiple mainframes on the ground and the results uploaded to the vehicle. Or, to put it another way... the CSM and LM computers were basically peripherals.

      If it ain't broke,

      • In the Apollo timeframe, a "supercomputer" would be a CDC 6600 (1964).

        3 MFLOPS and up to 10 million instructions per second, a 60-bit word size, and 262144 words of main memory (~3 million 6-bit characters) -- yes, your smartphone is more powerful. This was STILL the most powerful mainframe in 1969.
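
        (Working the quoted CDC 6600 numbers into familiar units; the smartphone figures are rough assumptions added only for comparison.)

```python
# Convert the CDC 6600 figures quoted above into familiar units. The 6600
# numbers come from the comment; the phone numbers are rough assumptions.

WORDS = 262_144          # main memory, 60-bit words (quoted above)
BITS_PER_WORD = 60

cdc_bytes = WORDS * BITS_PER_WORD / 8
print(f"CDC 6600 main memory: ~{cdc_bytes / 1_000_000:.1f} MB")

PHONE_RAM_BYTES = 3 * 1024**3   # assumed 3 GB of RAM in a 2015-era smartphone
print(f"Phone RAM is roughly {PHONE_RAM_BYTES / cdc_bytes:,.0f}x larger")

CDC_MIPS = 10                   # "up to 10 million instructions per second" (quoted)
PHONE_MIPS = 20_000             # very rough assumed figure for a 2015 smartphone SoC
print(f"Instruction rate: roughly {PHONE_MIPS / CDC_MIPS:,.0f}x faster")
```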

    • by sjbe ( 173966 ) on Thursday January 08, 2015 @12:32PM (#48765959)

      Many people are shocked that computers/systems still run after 20 years,

      Only people who don't know much. It's not shocking that such a thing would happen, or that hardware can be made that robust. What IS shocking is that people put systems in place without any thought whatsoever to what people might want to do 20 years later. Seriously, do you REALLY think it will be efficient or practical, without problems, to still be using the PC you are reading this on 20 years from now? Why would it be any different for a business or government?

      If it ain't broke, why fix it?

      Because it probably IS broken in a multitude of ways. Just because it can get a specific job done doesn't mean it does so efficiently or without problems. I've driven a lot of beater automobiles over the years, and while they usually got me from point A to point B, they were unquestionably broken in a number of ways. I have PCs that are 10-15 years old here in my company doing specific jobs, and they definitely have problems. Yes, we still get some productive work out of them, but that doesn't mean I shouldn't think about replacing them when I can.

      They sent man to the moon on less CPU horsepower than my Nexus 6.

      Because that is all they had at the time. Nobody would even dream of doing it that way today because we have better options now. Why limit yourself to yesterday's technology if you have a choice?

      Voyager has been running for more than 35 years in the harshness of space.

      Which is relevant how? You're comparing a spacecraft that human eyes will never see again with an earthbound computer system that we can modify or replace any time we want.

  • Besides, banks are even worse. They're still running virtual COBOL card systems in their basements.

    • I did a one-night job in 2005 to convert a token ring network into an Ethernet network at a Wall Street firm's branch office in Silicon Valley. First and last time that I ever saw a token ring network in the wild.
      • by Fished ( 574624 )

        Listen up, Junior ...

        In some ways, Token Ring was very much superior to Ethernet. A hospital I worked for in the late 90s had a huge (1000 nodes) 4 Mbps TR, all as one big subnet, built long before switches came along. If you tried to do that with Ethernet, it would have crashed and burned in a week. This was, on the whole, pretty reliable (if slow). The downside was that if one card in the ring failed, the whole thing would generally die. So it was great until the 10-year-old TR cards started failing.
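
        (Rough arithmetic on that setup: the ring speed and node count come from the comment above; the rest is just unit conversion.)

```python
# A 4 Mbps Token Ring shared by about 1000 stations, as described above.
RING_MBPS = 4
STATIONS = 1000

avg_kbps = RING_MBPS * 1000 / STATIONS
print(f"Average share with everyone active: ~{avg_kbps:.0f} kbps per station")

# Token passing is deterministic (worst case, you wait one trip of the token
# around the ring), which is the comment's point about why a single flat
# Ethernet segment of that size would have fared far worse.
```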

        • Junior?! That's funny. I was the most experienced tech on that one-night job. The contracting company hired two fresh-out-of-high-school students who thought they were hot stuff for unboxing a Dell computer without looking at the unboxing diagram first. The job was simple: remove the TR cable, plug in the Ethernet cable, and test the video app on 300 computers. These two jokers plugged the Ethernet cable into the TR port instead of the motherboard port, and didn't catch their mistake because they didn't test
        • The downside was that if one card in the ring failed, the whole thing would generally die.

          As I recall, that "downside" pretty much single-handedly killed the technology. It's a big deal.

  • by Anonymous Coward

    John Titor... Please report to... Oh, nevermind. Some bloke in a blue callbox has already claimed it.

  • I bet this is one of the systems that will have to be replaced to fix the 2038 bug. There are a lot more 32-bit UNIX systems out there.
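
    (The 2038 limit is easy to demonstrate: a signed 32-bit time_t counts seconds from the 1970 epoch and tops out at 2^31 - 1. A minimal illustration:)

```python
# The Y2038 limit: a signed 32-bit time_t overflows 2**31 - 1 seconds
# after the Unix epoch.
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=2**31 - 1)
print("32-bit time_t overflows at:", rollover)   # 2038-01-19 03:14:07+00:00
```
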
  • Orange Leos? (Score:5, Interesting)

    by sysjkb ( 574960 ) on Thursday January 08, 2015 @12:23PM (#48765863) Homepage

    I wonder if they are running "orange Leos"? Here's a post from alt.folklore.computers in 1998. Terribly impressive. I'm not sure his age estimate is necessarily accurate, though: the final incarnation of the Leo ceased to be manufactured in the latter half of the 60s, so it may be a bit younger.

    From: Deryk Barker (dbarker@camosun.bc.nospam.ca)
    Subject: Re: Multics
    Newsgroups: alt.folklore.computers, alt.os.multics
    Date: 1998/11/09

    [...]

    When my wife was working for Honeywell, in the 1980s, one of the
    customers she had dealings with was British Telecom.

    BT, at one location, had what they called the "orange Leos".

    Now, for those who don't know this, the LEO was the world's first-ever
    commercially-oriented machine (1951). Even more amazingly, the Lyons
    Electronic Office was designed and built by the J Lyons company,
    best-known as manufacturers of cakes and for their nationwide chain of
    corner tea shops.

    Anyway, an "orange Leo" was an ICL 2900 mainframe (they came in orange
    cabinets), emulating an ICL 1900 mainframe, emulating a GEC System 4
    mainframe emulating a LEO.

    30+ year old executable code over 3 architecture changes....

    • I miss the days when we had distinctive designs in the data center. Big blue mainframes, orange and blue DEC 20s and 10s.. Now it's all racks and the only blinking lights are on the switches :-(

  • by ErichTheRed ( 39327 ) on Thursday January 08, 2015 @12:37PM (#48766027)

    Not all legacy stuff is bad. Not all legacy stuff should be kept around to the point where you can't find people to run it, however.

    I've had experience working in die-hard IBM mainframe shops as well as places that used the HP MPE operating system on the HP 3000 minicomputer. In the 3000 case, the customer was relying on a service provider whose application was way, way, way out of date but still worked. All the IBM places I've ever worked have been slowly "modernizing" their application stack, but in most cases the core transaction processing has remained on the mainframe because that was the best solution. It's extremely rare these days to see an end-user-facing green-screen application, but they do exist as well. (Yes, I work in "boring" old-school industry sectors; very few web-framework-du-jour hipsters here, but we're also not old farts.)

    The problem I've seen is that vendors love the fact that customers are locked in and will do nothing to encourage them to get off. Most ancient mainframe code can run virtually unmodified on newer hardware, and that backwards compatibility is a big selling point. It allows IBM to go in, swap out your entire hardware platform at $x million, and keep billing you by the MIPS without changing any code.

    But...the reverse problem is that "mainframe migration" projects often end up becoming case studies of how Big Consulting Company X was paid hundreds of millions to not deliver a working system. I believe I've read about DWP's "Universal Credit" project, which has Accenture, IBM or Oracle written all over it. These kinds of projects usually try to port all the business logic and transaction processing to some horrible-to-maintain J2EE monstrosity backed by an Oracle database. They usually fail because (a) no one correctly estimates the work required to pull all that business logic out of 30+ years of cruft, and (b) the consulting companies replace their star team (which travels with the sales force) with new grads in India (who do the actual work). I've seen this cycle over and over again, and am still amazed that CIOs aren't wary of consultants.

  • I assume these applications are not running on the original hardware. They should still be working fine on current Fujitsu mainframes. There may be a valid reason to rewrite part or all of the applications because additional functionality is needed, but too often money is wasted replacing systems (especially mainframe systems) that still meet most of the enterprise's needs. Often, "more flexible reporting" is used as an excuse for hugely expensive rewrites, when a periodic data extract into a separate data
  • And we are running an IBM mainframe. Yes, it's been upgraded to z/OS, but it's fully OS/360-compatible, and I regularly review COBOL code with comments going back to 1980. There's been a push from above to use the Control-M "GUI" interface, but a lot of the folks here are resistant, since we have faster and better control via the terminal (sorta like GUI versus command line).

    And yes, my Windows workstation is simply a glorified terminal as I spend all day logged in to the mainframe itself (green screen apps).

  • VME is unfortunately on my CV. What amazes me is that they can still get parts for the damn thing.

  • If it's not broken, it gets the job done, it's vendor-supported, and you don't expect that support to end in the foreseeable future, then I don't see the problem.

    Age alone is not any reason to declare technology obsolete.

    Here's a common example: Stores still sell 4-function calculators for $5 or less. As far as the user is concerned, they are less-expensive versions of the same calculators you could buy from the mid-1980s on, and thinner-and-cheaper-with-LCD-and-button-battery versions of the kind you cou

  • by funwithBSD ( 245349 ) on Thursday January 08, 2015 @03:56PM (#48768471)

    The California DMV has them beat: they are still using code installed on UNISYS mainframes in 1970 to run the DMV's core applications.

    It is as old as I am...
