This month marks the end of an era in NASA computing. Marshall Space Flight Center powered down NASA’s last mainframe, the IBM Z9 Mainframe. For my millennial readers, I suppose that I should define what a mainframe is. Well, that’s easier said than done, but here goes: it’s a big computer that is known for being reliable, highly available, secure, and powerful. They are best suited for applications that are more transaction oriented and require a lot of input/output – that is, writing or reading from data storage devices.
They’re really not so bad, honestly, and they have their place. Things like virtual machines, hypervisors, thin clients, and swapping are all old hat to the mainframe generation, though they are new to the current generation of cyber youths.
In my first stint at NASA, I was at NASA’s Goddard Space Flight Center as a mainframe systems programmer when it was still cool. That IBM 360-95 was used to solve complex computational problems for space flight. Back then, I comfortably navigated the world of IBM 360 Assembler language and still remember the much-coveted “green card” that had all the pearls of information about machine code. Back then, real systems programmers did hexadecimal arithmetic – today, “there’s an app for it!”
//LCURETON JOB (NASA,CIO)
Interesting news, but it just raises the question of *why* you got rid of your mainframes, and what you replaced them with?
Good night, sweet prince.
Great read. I think many of us would love to hear why they were shut down and what NASA transitioned to. Being a Gen X’er and a software engineer who has spent the better part of the last decade working on a web application that attempts to bridge the old and new worlds, I’ve been lucky enough to experience the pros and cons of both. The recent NoSQL buzz also seems to have increased interest in old mainframes. Their reliability and availability are something to envy still in this supposed age of enlightened computing.
Code is God
Reply to J. R. Hacker,
We only kept the mainframe around to support applications that we knew would soon be retired. In that case, it was more cost-effective to keep the as-is architecture in place rather than migrate to a server environment. When we were in the position to retire the applications, retiring the mainframe made sense.
There had been no new application development on the mainframe here for a while. Our larger business applications run on SAP in a non-mainframe environment. The retirement also realized cost savings in software licenses.
I was amused to read about your “first stint” at NASA. I was one of the high school students who, as part of a summer program in 1969, got to program the IBM 360/95 at Goddard in Manhattan. That summer, it was the fastest machine in the world, and in those days, young people rarely got access to computers at all. (Then again, it was a time when “the powers that be” were pretty spooked by Sputnik, so opportunities for students in the sciences were starting to open up.) Anyway, it was a wonderful experience, and I went on to a career in software that continues to this day. It’s in some ways sad to see the last of the NASA mainframes turned off, but always nice to hear of someone else with fond memories of the ’95.
What is going to happen to the old mainframe? Y’know, ’cause, if you guys don’t need it anymore, I could use a new computer!
//GO.SYSIN DD *
BTW, IT’S UNCLEAR WHY THEY PURCHASED THAT (NEW) Z9 BOX
JUST TO SWITCH IT OFF IN A FEW YEARS?
/*
//
I worked on the sister machine of that NASA System/360 at the University of Michigan in the late 1960s. These days, people are puzzled when I tell them I can remember a time when walking into a room full of programmers and asking “Does anyone have a green card?” did not send people scrambling under desks.
What *I’m* curious about is this: was it evaluated whether it was cost effective to keep it running and use it to virtualize a passel (that’s a technical term) of Linux workloads?
If so, and it wasn’t, those numbers would be very interesting to see as well.
My father worked for IBM from the late 1950s to the mid 1980s, and I remember the 360 mainframe: a great machine for its time that helped put the USA on the moon. Mainframes are still found in many high-powered areas.
I like the JCL job description 😉
Like Jonathan and J. R. Hacker, I too would like to know what has replaced the mainframe. When I was an engineering student at Purdue in the ’70s, we had a CDC 6500 as our mainframe. And we programmed it with punch cards.
Progress is a wonderful thing!
Beth
More answers:
The Z9 was purchased in 2004, which makes it a geriatric machine by today’s standards. Back then, three physical mainframes were consolidated into LPARs, or logical partitions (if you have to ask what that means, it doesn’t matter). Those three systems were purchased in 1999.
Why let its abilities go to waste?
I feel sure you could get it an account for einstein(at)home (einstein dot phys dot uwm dot edu) and let it use BOINC (boinc dot berkeley dot edu/) to crunch numbers for that ‘World Year of Physics 2005 and International Year of Astronomy 2009 project supported by the American Physical Society (APS) and by a number of international organizations’. Although there are a lot of other worthy projects you could let it crunch numbers for (as visible at boinc dot berkeley dot edu/wiki/Project_list), einstein(at)home is discovering pulsars and black holes, etc.
What will they be replacing these systems with?
These IBM Z9s are still in use in many industries. I have one sitting “under my feet” at work in the data centre downstairs. I was an ICL mainframe sysprog in the 1980s and ’90s. ICL was a British computer manufacturer, and its VME operating system was a marvel. I once ran an S39 processor for 17 months without needing to shut down and restart. Reliability personified. I miss these “beasts”; they just did what you wanted and were very configurable to do what you needed.
Ah, yes, the mainframe.
I remember having to code “apps” (software, back then) that would fit into its 4K memory pages. If they didn’t fit, you needed the elegant finesse of handling jumps between pages.
I remember having to buy punch cards from a vending machine. Then, as today, college students had to watch every penny, so you had to learn to type very accurately so that you didn’t waste even one of those paper cards. Youngsters, let me translate that to mean: no backspace key.
Used Z9s are so cheap – my company has outsourced mainframe operations. We get charged by the MIP. Even though the Z9s do not have ‘engines’ which can be removed as your resource requirements go down, we can buy a smaller one and reduce our bill that way. The purchase price is less than the monthly labor bill from the outsourcer!
There are two real problems: everybody who knows how to run these things is getting ready to retire and the software vendors are out for blood from the people still dependent on them. They are determined to kill the goose and they are succeeding. Way to go, CA!
Ah, the ‘green card’ from the ’60s…surprisingly, I still use it today…Assembler is still in constant use, as is COBOL…
If mainframes are no longer needed, then what is NASA using when large computational power is needed?
Just curious…when was the last 1100/2200 Sperry/Unisys type mainframe shut down at NASA? NASA used lots of those types of systems as well as IBM brand mainframes.
What’s the equivalent of its computing “power” in a current computer? Is it like a really fast iMac? Or is it really ten times that?
Steve
Looks like some people just have to learn the hard way. GOOD LUCK!!
I still think the MAINFRAME is the best option for corporate and large processing organizations.
With what are you replacing it?
Despite the pity of switching off the last mainframe, NASA is making the right move in turning to more sustainable computing models like cloud computing.
Sorry to add the following URL, but I think you’ll find the following article interesting: http://www.hpcinthecloud.com/hpccloud/2011-02-03/mars_as_a_service_cloud_computing_for_the_red_planet_exploration_era.html
Someone asked how fast mainframes are. To give you a sense of perspective: in the early 1990s, I once wrote a script using a utility called DFSORT (whose most recent documentation had been published in 1972) that ran on an IBM machine. It was all batch processing, meaning a user would submit his/her job to the system queue. My little script did a complex sort on a list over a million records long, in under 0.25 seconds. That’s how fast.
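For perspective, here is a rough modern analogue in C++ (my own hypothetical sketch, not the original DFSORT job): sorting a million keys with the standard library and timing it.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Build a million pseudo-random "records" (just integer keys, for simplicity).
    std::vector<long> records(1000000);
    std::mt19937 rng(42);
    std::uniform_int_distribution<long> dist(0, 999999999);
    for (auto &r : records) r = dist(rng);

    // Time the sort, much as the batch job's accounting timed the DFSORT step.
    auto start = std::chrono::steady_clock::now();
    std::sort(records.begin(), records.end());
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = stop - start;
    std::printf("Sorted %zu records in %.3f seconds\n", records.size(), elapsed.count());
    return 0;
}

On ordinary hardware today this also finishes in a fraction of a second, which says something about how remarkable that number was for the early 1990s.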
Silly kids. This is not a big deal, except for the historical fact that it had to happen eventually. It’s no different than the last steam-powered ship being scrapped.
The workload previously handled by mainframes has shifted to software that creates massive “virtual computers” out of the spare compute cycles on the desktop and laptop computers that are lying around on the internet.
Here’s how it works: you install a little software adapter, and it joins the pool of CPUs available to do work anytime your computer is connected to the net and doesn’t have anything else to do (e.g., when your screensaver kicks in). At the other end, the “main” part of the program runs on someone else’s computer (laptop, desktop – doesn’t matter). The programs are written in a way that breaks the problem down into bite-sized chunks, and the chunks are doled out to CPUs in the pool as needed. If your computer is in the pool, then it picks up a chunk of work (downloads the data and a chunk of executable code), does the work, and returns the results when it’s done. (FYI: the code runs in a walled-off area of your computer, so that it can’t hurt your computer or rummage through your files.)
All of this means that you can create programs that automatically build a huge “virtual computer” out of the idle CPUs scattered around the world. (There are a LOT of them, believe me.) The main program doesn’t really have to know anything about the other computers. It just says, “Hey, when you get a chance, can one of you do this?”, and eventually it gets done.
This kind of system goes by a number of different names – “cloud computing” and “grid computing” being the most well-known. The basic idea behind it all is that your “computer” expands to match the size of the job, and then shrinks back down when you’re done. Boil it all down, and it means you don’t need a big ultra-fast mainframe “supercomputer” anymore. You can just borrow one for a day, and then give it back when you’re done.
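To make the chunking idea concrete, here’s a toy sketch in C++ (my own illustration, not any real grid framework’s API), with worker threads on one machine standing in for the pool of volunteer computers: each grabs a chunk, crunches it, and hands the result back.

#include <atomic>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // The "big job": sum the squares of 0..999999, split into 100 bite-sized chunks.
    const long kTotal = 1000000, kChunks = 100, kChunkSize = kTotal / kChunks;
    std::vector<long long> results(kChunks, 0);
    std::atomic<long> next_chunk{0};  // the pile of work the "main" program hands out

    auto worker = [&]() {
        // Each worker plays one idle computer in the pool: grab a chunk,
        // do the work, report the result, repeat until no work is left.
        long chunk;
        while ((chunk = next_chunk.fetch_add(1)) < kChunks) {
            long long sum = 0;
            for (long i = chunk * kChunkSize; i < (chunk + 1) * kChunkSize; ++i)
                sum += (long long)i * i;
            results[chunk] = sum;  // no locking needed: each chunk has its own slot
        }
    };

    // Four "volunteers" join the pool.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);
    for (auto &t : pool) t.join();

    // The main program never knew which worker did what; it just collects results.
    long long total = std::accumulate(results.begin(), results.end(), 0LL);
    std::printf("Sum of squares below %ld = %lld\n", kTotal, total);
    return 0;
}

Real systems like BOINC add the downloading, sandboxing, and fault tolerance on top, but the work-queue shape is the same.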
Cool, huh?
Regards,
Ken Scott
Digital Chaotics
I hope you’ve converted the old data to a new format….
I was a system programmer on the IBM 360/65 and IBM 360/85 running ASP, or LASP, at Bell Telephone Labs. I also worked on the BE90 emulator that emulated the IBM 7094 on the /65. I remember teaching JCL and complaining that it was the kludgiest and most inhuman language on the face of the earth. Yes, programming in ASM-G and ASM-H was fun, as was PL/1, Fortran G, Fortran H, Snobol IV, and other mainframe languages. That was fun, much fun. Today, the operating systems are lost in a “cloud” and are certainly restricted proprietary programs.
Memory space, which was so dearly conserved in the old days, is lavishly available today. Ah, the old days. When we finally got rid of the 7094 at Holmdel, it went out singing Auld Lang Syne from a low-quality speaker connected to the CPU. Its memory space was 32K. Oh my!
H
When you get rid of your old mainframes, you replace them with large scalable clusters of dirt-cheap IA/AMD64/PowerPC multi-core nodes running a Linux distro. You can run huge MPI jobs on such clusters, but of course data transfer between the nodes is a bottleneck. The more adventurous computing groups also use GPU acceleration on individual nodes (it runs really fast but results in lower code maintainability).
If some hardware vendor could come up with a truly symmetric SMP mainframe with as many cores (and as large a RAM) as in a small cluster, and equally fast access to the shared memory from ALL the cores, then mainframes would become popular again. Coding with OMP/threads/etc. on large SMP machines is so much more convenient than messy MPI (or equivalent networked parallelisation technologies).
However, the price/efficiency ratio is currently in favour of clusters of cheap nodes (some would even argue GPU-accelerated nodes). Converting your codes to MPI is a relatively small price to pay for the potentially unlimited scalability of computing clusters.
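For the curious, a minimal sketch of that MPI style (my own toy example, not anyone’s production code): each node sums its own slice of a range, and one explicit message-passing call combines the pieces. Compile with mpic++ and launch with mpirun.

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // which node am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // how many nodes in total?

    // Each node sums its own interleaved slice of 1..1000000.
    // There is no shared memory to lean on, unlike the OMP/threads world.
    const long kN = 1000000;
    long long local = 0;
    for (long i = rank + 1; i <= kN; i += size) local += i;

    // The explicit message-passing step that OMP/threads users never write:
    // all the partial sums travel over the network to rank 0.
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::printf("Sum 1..%ld = %lld\n", kN, total);

    MPI_Finalize();
    return 0;
}

The messiness shows up as soon as the data access pattern is less regular than this; with shared memory you’d just index into an array, while with MPI you have to ship the data around by hand.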
The mainframe may be gone, but NASA has lots of computing power in computer clusters. They are using more than the ‘cloud’ and virtual machines.
Such a sad day for NASA and space exploration. From John Glenn to Neil Armstrong and the Shuttle missions, there has never been a more reliable and available technology invented by mankind. Although I take issue with your comments concerning end-user interfaces, the mainframe continues to evolve into the 21st century, and I find it disconcerting to think that NASA cannot continue to use these systems for future missions. Godspeed and God bless those who invented HASP and have made contributions to enhance the mainframe environment during the last 50 years!
Progress is bittersweet! The information contained within those miles of magnetic domains has historical and scientific relevance. I would hope that the information processed by those classic machines has been transferred to a medium that can be processed and utilized by contemporary programmers and hardware.
So, how many years was it in the planning to convert to other computer languages that could run on servers?
DEF STOR 8M
IPL CMS
Just some of the things I can forget now…..
Howie Weiss said: “When we finally got rid of the 7094 at Holmdel, it went out singing Auld Lang Syne from a low-quality speaker.”
I would have expected “Daisy Bell” (Bicycle Built for Two)
If NASA is becoming more reliant upon cloud computing provided by a commercial outside provider, they are doing something stupid. NASA should keep all of its computing power in-house, so they are the only ones responsible. Why do we assume that the internet will be so reliable in the future?
I wonder if those computers are available for sale to smaller companies after the relevant data is removed? I know lots of companies that could use the servers for their smaller operations, and this would be so much faster than trying to use a small PC as a server, for those who cannot afford the latest technology from Microsoft.
What are they using to compute the information now?
Thank you for sharing an insightful post on this significant milestone for NASA. I especially like your reference to the concepts of today being old hat to the mainframe generation. I view this as a new landing for NASA in its Transformation Journey as I outline in my post here: http://bit.ly/yFP0Dj
I cannot see how a mainframe code base is easier to maintain than a C++-based one using the STL (or NVIDIA’s CUDA, for a GPU option). C++ is far more powerful than many mainframe options as a development language, and GPUs today CRUSH ANY mainframe on the market.
Seriously, $300 at Best Buy gets you 1.1 teraflops (GeForce 560 Ti). You can get equal uptime from any number of other solutions now, including SANs tied to off-the-shelf x86/x64-based servers. Batch processing is going the way of the dodo anyway. It’s easier to model systems in their natural single-transaction state.
I thought it was cool seeing the core memory in an old PDP-11. It doesn’t mean I think we should still use it.
I started with IBM in the Mainframe era. I remember reading about all the “really innovative” things that VMware was able to do with their hypervisor several years ago. I remember thinking “what’s so great about this? We have been able to do this with PR/SM for 20 years!” I guess the more things change….
Big Blue is doing a roaring trade in mainframes, and IBM’s mainframe profit is on the increase. I wish NASA had thought about mainframe modernization to the zEnterprise and zBX stuff.
“Why do we assume that the internet will be so reliable in the future?”
Your home Internet connection might not be reliable. That’s the experience most moms and dads have. But in the locations where NASA is set up, mercy snakes alive we got us BILLIONS to spend on backbones. Plus the protocols have been reliable since day one. It was all specifically designed that way. (Hello?) Anyway, with computing it’s all about clusters right now, to say the very least….
The points explained in the post are clear, and they all support what you have written. My point is that I want to invest in this technology for my business; however, since I have read some things about security, I have become somewhat skeptical about investing. To what extent is it safe for data storage, and which are the most reliable companies offering quality services? Cloud computing will become the next-generation trend for business management.
This is very interesting. I think that they didn’t need to shut down NASA’s mainframe.
Thank you for listening.