Inside the LEO Doghouse: Nuclear Thermal Engines

Perhaps it’s a Midwestern thing, I don’t know.  I grew up outside Chicago (although my family is mostly all from back east) so that’s where my theory originates regarding it being a Midwestern thing.  After all, when I was growing up, nobody around me seemed to think that it was odd, so my assumption is that they all – we all – pronounced it this same way:  “new-que-lar.”  It was only later, when removed from the wayward influences of my isolated rustic upbringing, that it was pointed out to me – sometimes amidst unsuppressed laughter – that the word is spelled “n-u-c-l-e-a-r”.  Or, in other words, it is pronounced: “new-clear.”  Okay, so, whatever.

If I’m talking about ‘nuclear this’ or ‘nuclear that,’ what’s shown in the picture above is probably what popped into your head.  And that’s fair.  This is the way that nuclear power most commonly impacts our daily existence, i.e., through the light switches and electrical outlets in our houses that are ultimately traceable back to a power plant, some of which are based on nuclear fission reactors.  Below are a couple of other applications of nuclear power with which we are familiar.

Okay, so what does this have to do with rockets?  Well, there are ways to use nuclear power to create rocket propulsion.  And, by the way, this is not some newfangled idea out of the blue.  Did you know that one of the original plans for the third stage of what would become the Saturn V rocket was for that stage to use nuclear-thermal propulsion?  That plan was eventually dropped and a configuration using a J-2 engine was chosen instead, but going all of the way back to the late 1950’s people were thinking of ways to use the extraordinary power of nuclear fission to enable and enhance space exploration.

There are two basic classes of rockets that use nuclear fission.  One is called “nuclear-electric” and the other is called “nuclear-thermal.”  In a nuclear-electric rocket, you use the reactor to generate electricity (like a small power plant) and then use that electricity to make high-velocity ions.  The latter portion of the sequence is called “ion propulsion” and there are different schemes and ideas out there, some of which have flown on unmanned spacecraft in the past using other sources of electrical power.


Nuclear-electric propulsion is extremely efficient.  In the past we’ve talked about specific impulse being a measure of rocket efficiency.  Well, a nuclear-electric propulsion system is on the order of ten or twenty times more efficient than your typical high-performance liquid hydrogen / liquid oxygen chemical propulsion rocket such as J-2X or RS-25.  BUT (and this is a really, really big “but”), for all that efficiency, they don’t generate much thrust.  AND they are very heavy.  Thus, the only place where using nuclear-electric propulsion makes any sense is in space.  Even there in “weightless” space, the extremely low thrust-to-weight ratio means that this propulsion system is only appropriate for missions where you’re willing to be very patient and get to wherever you’re going quite slowly.  That’s not really an appropriate approach for missions with humans on board.
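Just to put some rough numbers on that patience, here is a back-of-the-envelope sketch in Python.  Every number in it is invented purely for illustration, and the vehicle mass is treated as constant during the burn, which is a gross simplification:

```python
def burn_time_days(delta_v_mps, mass_kg, thrust_n):
    """Days of continuous thrusting to accumulate delta_v,
    treating vehicle mass as roughly constant (a rough cut only)."""
    accel = thrust_n / mass_kg            # m/s^2
    return delta_v_mps / accel / 86400.0  # seconds -> days

# A hypothetical 20,000 kg vehicle needing 4 km/s of delta-v:
chem_days = burn_time_days(4000.0, 20000.0, 1_000_000.0)  # ~1 MN chemical-class thrust
elec_days = burn_time_days(4000.0, 20000.0, 50.0)         # ~50 N electric-class thrust
print(f"chemical: {chem_days * 86400:.0f} seconds of burn")
print(f"electric: {elec_days:.1f} days of continuous burn")
```

With these made-up numbers, the chemical burn is over in about a minute and a half while the electric system has to thrust continuously for nearly three weeks.  That’s the patience I’m talking about.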

The other class of nuclear power rocket engines, and the one that I really want to tell you about, is nuclear-thermal rockets.  It is appropriate that we discuss nuclear-thermal rockets in an article immediately following an article discussing expander cycle engines since they are actually closely related.  Almost cousins.  Below is a schematic for a nuclear-thermal rocket in the same general format as the various expander cycle engines were shown in the previous article.


What you don’t have here is any oxidizer.  Why?  Because there is no combustion.  In a normal rocket engine we use fuel and oxidizer in a chemical reaction to create hot combustion products.  It is the ejection of those hot combustion products that generates the engine thrust.  For a nuclear-thermal rocket engine we use the reactor to make the hot stuff.  You can think of the reactor, when operating, as a really, really powerful heat source, even more powerful than a chemical reaction.  Thus, I can use that heat source to generate turbine drive gas, just as in an expander cycle engine, and I can also use that heat source to make the hot gas that generates the engine thrust.  In terms of configuration, the reactor has built into it flow passages where the fuel picks up heat as it goes along.  These passages can be along the outside, which I’ve shown here as feeding the turbine, and they are also throughout the innards of the core.  There are different ways of accomplishing this.  One way is to extrude the core rods with passages – “coolant channels” – through the length of the rods.  This is shown in an old sketch from the NASA archives below.  Another way to achieve this is to make the core out of pellets or “pebbles” trapped in little cages.  Doing this, you’d get what’s called a “pebble-bed reactor” and such a configuration provides for lots and lots of heat exchange surface area between the core pellets and the working fluid flowing through.


So, what’s the “fuel” in the rocket schematic, i.e., the working fluid shown in red?  The typical answer is hydrogen.  One of the reasons that we use hydrogen in a chemical engine is because when we run fuel-rich, we get lots of hot, unreacted hydrogen as part of the exhaust.  Hydrogen is very light.  When it gets hot and energetic – and hydrogen picks up heat wonderfully – it moves very fast.  If you think back to the rocket equation, fast moving exhaust means high performance.  In this case for the nuclear-thermal rocket, the exhaust is pure hydrogen, so performance can be quite high.  How high?  Well, it’s not as high as the nuclear-electric options discussed above, but specific impulse values two times that of J-2X or RS-25 are entirely plausible.  Further, despite the fact that nuclear-thermal engines are quite heavy, their thrust-to-weight ratio is generally much better than the nuclear-electric options.  In other words, a nuclear-thermal engine has some good “oomph,” enough oomph to make it potentially usable for human spaceflight.  And that’s why it was seriously contemplated in the earliest planning for the mission to the moon over fifty years ago.  That’s also why, in my humble opinion, it is a prime candidate for any future human mission to Mars.
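To see what doubling the specific impulse actually buys you, go back to the rocket equation.  Here’s a small sketch; the delta-v and Isp values below are plausible placeholders, not figures for any real mission or engine:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_mps, isp_s):
    """Tsiolkovsky rocket equation: the fraction of initial vehicle
    mass that must be propellant to achieve the given delta-v."""
    return 1.0 - math.exp(-delta_v_mps / (isp_s * G0))

dv = 3600.0  # m/s, a placeholder burn
chem = propellant_fraction(dv, 450.0)  # roughly RS-25-class vacuum Isp
ntr  = propellant_fraction(dv, 900.0)  # roughly double, nuclear-thermal-class
print(f"chemical: {chem:.0%} of the vehicle is propellant")
print(f"nuclear-thermal: {ntr:.0%} of the vehicle is propellant")
```

With these placeholder numbers, doubling Isp drops the propellant fraction from roughly 56% to roughly 34% of the initial vehicle mass.  Everything you don’t have to carry as propellant is payload, structure, or margin, and that’s why this performance matters so much for a Mars-class mission.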

As mentioned above, this is not a new idea.  It reaches all of the way back to the 1950’s.  There was a series of active programs all throughout the 1960’s falling under the general heading of NERVA, Nuclear Engine for Rocket Vehicle Application.  Below is a picture of an actual test of one of these engines.


So, with all this history behind us and with all this potential for performance, why on earth haven’t we been pursuing this technology first and foremost?  Because, well, nothing is free and nothing is ever as easy as it seems at first.

The biggest struggle with nuclear-thermal rockets is that whole radiation thing.  Okay, yes, I said it.  Radiation is bad.  Deadly.  And very long-lasting.  While rocket engines of any type always pack a punch in terms of power density and, therefore, the possibility for catastrophe, with the added spice of radiation, you’ve got quite the potential for a noxious stew.  Does this mean that we ought to simply avoid it altogether?  That’s a valid question and one that’s been debated for about 50 years.  It would be presumptuous of me to suggest that I could resolve the issue definitively, but we can discuss the constituent elements rather than just falling back on the “radiation is scary” answer.


First, let’s talk about whether it could be used on a vehicle.  The reactor is going to generate radiation.  Internally, that’s how it works, and that radiation in different forms overflows the boundaries of the reactor.  It just does.  So, what do you do?  Well, you provide shielding.  The truth is that space is chock full of radiation.  If not for our little pocket of safety thanks to the magnetic field of planet earth, we’d be cooked to a crisp by the radiation pouring out of the sun.  When you’re in space, particularly if you’re going beyond our little planetary pocket of safety and traveling to the moon or to Mars, you’re going to get bombarded by radiation, so no matter what, shielding is necessary.  Shielding is heavy because in order for it to be effective, you need big, heavy atoms to catch gamma rays (my very simplistic explanation).  Lead and tungsten are two common shielding materials for this purpose.  With a fission reactor, you are also going to need something for neutron flux moderation.  The typical material for this is lithium hydride, but the propellant tank itself, containing hydrogen, also works well for this.

A means for minimizing the weight impact for the shielding used to protect the astronauts from the reactor radiation is to use the notion of a shadow.  In the sketch below, you have a reactor on the back end of the vehicle, a shield in between, and the spacecraft up front.  Between, connecting everything and not shown, would be the propellant tank and the usual shiny structural trusses.  As you can see, the shielding creates a shadow from the radiation within which the spacecraft sits.  Now, it’s not always this simple because you sometimes need holes through the shield for functional reasons or you could get reflected/scattered radiation effects from structural elements, but this is the most common general scheme for dealing with a reactor on a spacecraft.  Stick the reactor out a ways from everything, place the shield close, and cast a long, broad shadow.
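The geometry of that shadow is simple enough to sketch with similar triangles.  If you treat the reactor core as a point source behind a disk-shaped shield (a big simplification of the real analysis), the protected radius grows linearly with distance.  All of the dimensions below are hypothetical:

```python
def shadow_radius(r_shield_m, d_shield_m, d_craft_m):
    """Radius of the shadow cast by a disk shield of radius r_shield,
    located d_shield from a point-like reactor core, evaluated at
    distance d_craft from the core (simple similar triangles)."""
    return r_shield_m * d_craft_m / d_shield_m

# Hypothetical layout: 1 m shield radius, shield 2 m from the core,
# crew spacecraft 30 m out on the end of the truss.
print(f"protected radius at 30 m: {shadow_radius(1.0, 2.0, 30.0):.0f} m")
```

That’s the quantitative reason you stick the reactor out a ways: pushing the spacecraft farther down the truss widens the shadow without adding a single extra pound of shielding.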


Okay, you say, you’ve protected the astronauts, great.  But what about the six or seven billion people back here on planet earth?  After all, in order to fire up a nuclear-thermal rocket in space you first have to get it into space and that means that you have to launch it from the surface of the planet.  Launch always involves risk.  What happens if the launch vehicle blows up?  If the launch vehicle blows up, then the reactor blows up.  Wow.  Now, how dangerous is that?  I will not pretend that I can answer that question with my limited background.  But I can tell you that prior to and during launch, the reactor is “cold.”  While you probably wouldn’t want to use enriched uranium to make wallpaper for your house, it’s not that horrifically dangerous prior to use in an active reactor.  It is only after the reactor gets going that the innards get all juiced up and seriously radioactive.  The plan would be to launch the reactor never having been “juiced up” and only start it when it is at a safe distance from earth thereby eliminating as much as possible the potential of reentry of a hot, radioactive reactor into the atmosphere.

[Note that the fact that you need something like tungsten for a shield (a very heavy metal) and you’ve got bundled-up uranium in your reactor (another, even heavier metal) are big reasons why a nuclear-thermal rocket engine is typically so heavy as compared to a chemical rocket engine.]

The next issue to deal with for a nuclear-thermal rocket is probably one of the most difficult: testing.  On the one hand, we’ve got lots of places where we can test rockets.  On the other hand, we have certain places where we test reactors (mostly under the expert supervision of the Department of Energy in coordination with the U.S. Navy).  But putting those two pieces together and playing with them as a unit, now that’s really tough.  Why?  Because of that darn radiation thing again.

After a typical J-2X or RS-25 test, after we’ve cleared residual propellants and bled away any excessive pressures, we’ve got technicians all over those engines.  They’re inspecting this, examining that, taking things apart, putting them back together.  The whole point of a development program is to get data and a lot of that data comes in the form of post-test inspections.  With a nuclear-thermal rocket, that wouldn’t be possible unless you really, really didn’t like your techs (please note that’s not serious, just a joke in poor taste).  Once the reactor has been fired up, it’s hot.  Yes, you can dial it back down so that it’s no longer at full throttle, but both it and the surrounding stuff are contaminated with radioactivity to some extent.  And you don’t just wipe radioactivity away with a damp rag.  After that first initiation of a self-sustaining chain reaction (i.e., “critical”), everything needs to be handled very differently.  Also, the hydrogen working fluid that we push through the reactor picks up some level of radioactivity.  No, not a lot.  But under modern safety restrictions, all of that hydrogen would have to be captured and scrubbed clean before release.  Capturing rocket exhaust is not an easy job.  It’s possible but it requires some extraordinary test facility capabilities.


With all this difficulty, how can we conceive of getting through a development program?  A rocket engine development program requires testing because, frankly, we are demonstrably not smart enough to do without it.  One answer:  Split the engine into two pieces.  If you do the rocket part separate from the reactor part, then you can keep the two pieces blissfully in their natural environments, i.e., the rocket part on NASA test stands and the reactor part in the Department of Energy labs.  Focusing on the rocket side (not surprising for me, eh?), the difficulty then becomes simulating the heat source that is the reactor.  There has been some work done here at NASA MSFC on creating reactor simulators specifically for the purpose of testing subsystems separately from reactors, whether those subsystems are rocket engines or power generation systems.  Below is a picture of one such reactor simulator.


In this manner you can minimize or possibly even eliminate the need for the combined rocket/reactor testing that is so difficult to pull off.

Before nuclear-thermal rockets can be used on missions of the future, there are a number of challenges to overcome, but the potential gains in vehicle and mission performance are impressive.  While this topic doesn’t fall entirely within the realm of liquid rocket engines consistent with the title of this blog, I thought that the similarity of the schematic to expander cycle engines would be of interest.  In this case, rather than a chemical reaction, you have nuclear fission, yet the engine cycle is still a matter of driving a fluid into a place where it gets hot and, from there, is ejected at high velocities.  In this way, a rocket is a rocket is a rocket, even if it is “new-que-lar.”

Inside the LEO Doghouse: The Art of Expander Cycle Engines

If you go back several generations on my mother’s side of the family, you will find a famous artist named Charles Frederick Kimball.  Also on my mother’s side of the family, in a different branch, a couple of generations later, there was a professional commercial artist.  On my father’s side, my grandmother was a wonderful artist who painted mostly landscapes of the Mohawk and Hudson River valleys in upstate New York.  And, of course, I’m married to an extremely talented artist.  You would think with those bloodlines and that much exposure, I’d have just a bit of artistic ability myself.  You would be wrong.  I love art.  I just can’t make it.


The closest thing that I come to visual expression is confined to Microsoft PowerPoint creations.  However, within that narrow arena, particularly when it comes to engineering subjects, there is still fun to be had.  What we’re going to do for this article is undertake one of my favorite pseudo-artistic hobbies and play with expander cycle engine schematics.

So, let’s start with a simple, happy little cycle called the Closed Expander Cycle.  Most of what you need to know about this cycle is in the name.  First, it is closed.  That means that all of the propellants that come into the engine leave by going through the throat of the main combustion chamber thereby yielding the greatest chemical efficiency available.  Later, we’ll see that the opposite of “closed” is “open.”  Second, it is an expander.  That means that turbomachinery is driven by propellants that picked up heat energy from cooling circuits in the main combustion chamber and nozzle.  Typically, expander cycle engines use cryogenic propellants so that when these propellants are heated they change from liquid-like fluids to gas-like fluids.  Turbines very efficiently make use of gas-like drive fluids.  (Note that I keep referring to “fluids” rather than simply liquids and gases.  That’s because it’s usually a good idea to deal with supercritical fluids in cooling tubes or channels.  Phase changes can be unpredictable and lead to some odd pressure profiles.)


Above is a Microsoft PowerPoint masterpiece illustrating the Closed Expander Cycle rocket engine.  Fuel and oxidizer come in from the stage and are put through pumps to raise their pressure.  On the fuel side, the pump discharge is routed through the main fuel valve (MFV) to the nozzle and the main combustion chamber (MCC) cooling jackets.  I’ve not shown the actual routing here.  Typically, the MCC is cooled first and then, the now warmer fuel is used to cool the nozzle.  The heat loads in the MCC are significantly higher than those in the nozzle.  But whatever the exact routing of the cooling fluid, the discharge, now full of energy picked up from the process of cooling, is fed into the turbines.  The oxidizer turbine bypass valve (OTBV) shown in the diagram is a means for controlling mixture ratio by moderating the power to the oxidizer turbine.  In some cases, if you have only one mixture ratio setting for the engine, you might be able to put an orifice here rather than a valve.  The turbines are driven by the warm fuel and then the discharge of the turbines is fed through to the main injector and then into the combustion zone.  On the oxidizer side, the routing is much simpler.  The oxidizer pump discharge is plumbed through the main oxidizer valve (MOV) directly into the main injector.  Within the MCC, you have the combustion of your propellants, the resultant release of energy, the generation of high-velocity combustion products, and the expulsion of these products through the sonic MCC throat and out the supersonic nozzle.  Ta-da, thrust is made!

The closed expander is one of the simplest engine cycles that has ever been imagined.  The venerable RL10 engine first developed in the 1950s and still flying today is based on this cycle (with the slight twist that there is only one turbine and the pumps are connected through a gear box – thereby eliminating the need for the OTBV).  This simplicity is both the strength of the cycle and also its limiting feature.  Consider the fact that all of the fuel – hydrogen in the case of most expanders – gets pushed all of the way through the engine to finally end up getting injected into the combustion chamber.  All that pushing translates to pressure drops.  It means that the turbines don’t have that much pressure ratio to deal with in terms of making power for the pumps.  In other words, the downstream side of the turbine is the lowest pressure point in the cycle and that’s the combustion chamber.  The result is that your chamber pressure can’t be very high.  That means that the throat of your MCC is relatively large and then that means the expansion ratio of your nozzle and nozzle extension start to get limited simply by size and structural weight.
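You can see the squeeze by walking a toy pressure budget through the cycle.  Every number below is invented for illustration; a real engine balance is far more detailed:

```python
# Toy closed-expander pressure budget, all values invented, in psia:
pump_discharge = 2000.0
after_valve    = pump_discharge - 100.0  # main fuel valve + line losses
after_jackets  = after_valve - 400.0     # MCC + nozzle cooling jacket drop
after_turbine  = after_jackets / 1.5     # pressure ratio the turbines need
chamber        = after_turbine - 150.0   # main injector drop
print(f"chamber pressure: {chamber:.0f} psia")
```

The chamber sits at the bottom of every one of those drops.  That’s the whole problem: increase any loss, or ask the turbines for more pressure ratio to power the pumps, and the chamber pressure falls even further.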


Also, note that all of the power to drive the entire cycle is provided by the heat picked up by the fuel in the MCC and nozzle cooling channels.  This then becomes a limiting factor in terms of the overall power and thrust-class of the engine.  As an engine gets bigger, at a given chamber pressure, the thrust level increases to the second power of the characteristic throat diameter, but the available surface area to be used to pick up heat to power the cycle only increases by that characteristic diameter to the first power.  In other words, thrust is proportional to “D-squared” but, to a first order, turbine power is proportional to “D.”  Thus, you can only get so big before you can’t get enough power to run the cycle.  One means for overcoming this is to make the combustion chamber longer just to give yourself more heat transfer surface area.  The European engine called the Vinci follows this approach.  But even this approach is limiting if taken too far since a chamber that is too long makes for less efficient combustion and, of course, a longer combustion chamber also starts to get awfully darn heavy.
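That D-squared-versus-D argument is worth a quick sketch of its own.  This is pure proportionality with no real engine numbers in it:

```python
def power_margin(d_ratio):
    """Scale the characteristic throat diameter by d_ratio: thrust demand
    grows roughly as D^2 while heat-pickup surface area (and thus cycle
    power) grows roughly as D, so power per unit thrust shrinks as 1/D."""
    thrust_demand = d_ratio ** 2
    heat_supply = d_ratio
    return heat_supply / thrust_demand

for r in (1.0, 2.0, 4.0):
    print(f"{r:g}x diameter -> {power_margin(r):.2f}x cycle power per unit thrust")
```

Doubling the diameter quadruples the thrust you’re asking for but only doubles the heat you can collect, so your margin to run the cycle is cut in half.  Keep growing the engine and eventually that margin runs out entirely.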

So, how big can a closed expander cycle rocket engine be?  Well, that’s a point of recurring dispute and debate.  I can only give my opinion.  I would say that the closed expander cycle engine is most useful and most practical when kept to a thrust level of less than approximately 35,000 pounds-force.

Getting back to the notion of artistic expression, what then are the possible variations on the theme of the expander cycle engine?  Well, the themes and variations are used to explore and potentially overcome perceived shortfalls in the Closed Expander Cycle.  The first in this series is the Closed Split Expander, the portrait of which is below:


The shortfall being addressed here is the fact that in the Closed Expander Cycle all of the fuel was pushed all over the engine resulting in large pressure losses.  In this case, some – usually most – of the fuel is pumped to a lower pressure through a first stage in the pump and then another portion is pumped to a higher pressure.  Thus, the fuel supply is “split” and that’s the origin of the name.  It is this higher pressure stream, routed through the fuel coolant control valve (FCCV) that is pushed all over the engine to cool the MCC and nozzle and to drive the turbines.  The lower pressure stream is plumbed directly into the main injector.  The theory is that by not requiring all of the fuel to be pumped up to the highest pressure, you relieve the power requirements for the fuel turbine.  It is always the hydrogen turbopump that eats up the biggest fraction of the power generated in the cycle so this is an important notion.
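A rough pump-power estimate shows why the split is tempting.  Ideal pump power is mass flow times pressure rise divided by density and efficiency; all of the flow rates, pressures, and efficiencies below are invented for illustration:

```python
def pump_power_kw(mdot_kgs, dp_pa, rho=70.0, eta=0.7):
    """Ideal pump power: mass flow * pressure rise / (density * efficiency).
    rho ~70 kg/m^3 is in the neighborhood of liquid hydrogen density."""
    return mdot_kgs * dp_pa / (rho * eta) / 1000.0

mdot = 10.0                 # kg/s of fuel, invented
p_low, p_high = 8e6, 14e6   # Pa: first-stage and full pressure rises, invented

# Plain closed expander: pump all of the fuel to full pressure.
plain = pump_power_kw(mdot, p_high)
# Split expander: 70% stops at the first stage, 30% continues to full pressure.
split = pump_power_kw(mdot, p_low) + pump_power_kw(0.3 * mdot, p_high - p_low)
print(f"plain: {plain:.0f} kW, split: {split:.0f} kW")
```

With these invented numbers the split saves about 30% of the fuel-pump power, though less flow pushed through the coolant circuit can also undercut the heat pickup that powers the cycle in the first place.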

Does this cycle help?  Yes, some.  Maybe.  The balance of how much to split, what that split does to the efficiency of the heat transfer (less flow means possibly lower fluid velocities, lower velocities means lower heat transfer, lower heat transfer means less power…) makes it not always clear that you gain a whole lot from the effort of making the cycle more complex.  The portrait, however, is nice, don’t you think?  It has a realistic flair, a mid-century industrialist-utilitarian feel.

Next, wishing to express yourself, you can address the age-old issue of the intermediate seal in the oxidizer turbopump.  Take a good look at the first two schematics presented here.  You will see that the oxidizer pump is being driven by a turbine using fuel as a working fluid.  This is a very typical situation with rocket engines, whether they’re expander cycle engines or other cycles.  For example, this is the situation that you have in the RS-25 staged-combustion cycle engine and in the J-2X gas-generator cycle engine.  What that situation sets up, however, is a potential catastrophic failure.  You have fuel and oxygen in the same machine along with spinning metal parts.  If the two fluids mix and anything rubs, then BOOM, you have a bad day.  So, inside oxidizer pumps you usually have a complex sealing arrangement that includes a continuous helium barrier purge to keep the two fluids separate.  For the next expander cycle schematic, however, we can eliminate the need for this complex, purged seal.


This is a Closed Dual Expander Cycle.  It is still “closed” in that everything that comes into the engine leaves through the MCC throat.  The new part is that it is “dual” in that we now not only use the fuel to cool, but we also use the oxidizer.  Thus, we use heated fuel to drive the fuel turbopump and heated oxidizer to drive the oxidizer turbopump.  For this sketch, I’ve used a split configuration on the oxidizer side with a portion of the flow being pumped to a lower pressure and routed directly to the main injector and another portion pumped to a higher pressure, routed through the oxidizer coolant control valve (OCCV), to be pushed through the regeneratively cooled nozzle jacket and then through the oxidizer turbopump turbine.  I’ve done this since you’re likely running the engine at a mixture ratio (hydrogen/oxygen) of between 5 and 6.  You wouldn’t want to push that much oxidizer through the nozzle cooling channels or tubes.  Now, if you’re designing an expander with something like methane as your fuel so your mixture ratio is lower, then maybe you can consider a non-split oxidizer side.

Note that with the dual expander approach I’ve gotten rid of the need for the purged seal package in the oxidizer pump and thus I’ve eliminated a potential catastrophic scenario (in the event of seal package failure).  However, I’ve accomplished that at the cost of some cycle complexity.  Also, cooling with oxidizer does not always make everyone happy.  Whenever you have a cooling jacket (either smooth wall or tubes), you always have the potential for cracking and leaking.  If you’re cooling with hydrogen, then a little leakage of extra hydrogen into a fuel-rich environment is a relatively benign situation.  It happens all of the time.  But what if you leak oxidizer into that fuel-rich combustion product environment?  Well, some studies have suggested that you’ll be fine, but it makes me just a little uneasy.  Then, also, you’re using heated oxidizer to drive your turbine.  It can be done, but using something like oxygen to drive spinning metal parts requires great care.  Under the wrong circumstances, a pure oxidizer environment can burn with just about anything as fuel, including most metals.  So, for all your effort to eliminate the seal package in the oxidizer turbopump, it’s not clear to me that you’ve made the situation that much safer.  However, despite these potential drawbacks, the schematic portrait itself has a certain baroque feel to it with the oxidizer side being positively rococo.

So, you’ve gone this far.  Why not take the final plunge?  Introducing the “Closed Dual Split Expander:”


By now, having stepped through the progression, you understand how it is “closed,” how it is “dual,” and how it is “split” (on both sides this time).  It’s not practical in terms of being a recipe for a successful rocket engine design for a variety of reasons balancing complexity versus intended advantages, but it’s an impressive schematic.  To me, it has a gothic feel, almost like a medieval cathedral with glorious flying buttresses and cascading ornamentation that just leaves you dazzled with details.

So, we’ve wandered off into the weeds of making expander cycle portraits for the sake of their beauty rather than necessarily their useful practicality.  Let’s return to the more practical realm and question that which has been common to every cycle thus far presented.  It’s been the word “closed.”  Does an expander cycle engine have to be a closed cycle?  Of course not!  Once we’ve made that observation, we come to a very practical option.  Introducing the “Open Expander Cycle:”


The biggest difference between this and every previous schematic is the fact that the working fluid driving the turbines is dumped into the downstream portion of the nozzle.  This is a much lower pressure point than the main combustion zone.  The first thing that most people think when they see this cycle is that it must be a lower performance engine.  After all, you’re dumping propellant downstream of the MCC throat.  And, yes, that is an inherent inefficiency within this cycle.  Whenever you expel propellants in some way bypassing the primary combustion, you lose efficiency.  However, here is what you gain:  lots and lots of margin on your pressure budget.  Because I don’t have to try to stuff the turbine bypass into the combustion chamber, I can make my chamber pressure much higher.  In a practical sense, I can make it two or three times higher than in a simple closed expander cycle engine.  What that allows me to do is make the throat very small and that, in turn, provides for the opportunity for a very high nozzle expansion ratio within reasonable size and structural weight limits.  The very high expansion ratio means more exhaust acceleration and, in this way, I can get almost all of the way back to the same kind of performance numbers as a closed cycle despite the propellant dump.
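The throat-size argument falls right out of the standard thrust relation, F = Cf · Pc · At.  Holding thrust and thrust coefficient fixed (both values below are invented placeholders, not any real engine’s numbers):

```python
def throat_area_m2(thrust_n, pc_pa, cf=1.8):
    """Throat area from F = Cf * Pc * At, solved for At.
    Cf ~1.8 is a generic vacuum thrust coefficient, purely illustrative."""
    return thrust_n / (cf * pc_pa)

thrust = 150_000.0  # N, invented
for pc_bar in (40.0, 120.0):  # closed-expander-ish vs open-expander-ish Pc
    area_cm2 = throat_area_m2(thrust, pc_bar * 1e5) * 1e4
    print(f"Pc = {pc_bar:.0f} bar -> throat area {area_cm2:.0f} cm^2")
```

Tripling the chamber pressure cuts the throat area to a third, and a smaller throat is exactly what lets a big expansion ratio fit within sane nozzle size and weight.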


Here, however, is the really cool part of the open expander cycle: I can leverage the high pressure ratio across the turbines such that I can get more power out of a given heat transfer level in the cooling jackets.  Up above, earlier in this article, I suggested that there was a practical thrust limit for closed expanders of approximately 35,000 pounds-force (my opinion) and this was due to the geometric relationships between thrust and heat transfer surface area.  For an open expander, I can design high-pressure-ratio turbines for which I don’t need as much heat pick up to drive the pumps.  Thus, I can make a higher thrust engine.  How high?  Well, my good friends from Mitsubishi Heavy Industries (MHI) and the Japan Aerospace Exploration Agency (JAXA) have designed a version of this cycle that gets up to 60,000 pounds-force of thrust and I’ve seen other conceptual designs that go even higher.  The folks in Japan already fly a smaller version of this cycle in the LE-5B engine that generates 32,500 pounds-force.  Note that they often refer to this cycle by another name that is very common in the literature and that’s “expander bleed cycle,” with the “bleed” portion describing the overboard dump into the nozzle.  I prefer the designation of “open” since it clearly distinguishes it from the “closed” cycles illustrated earlier.
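Here’s a sketch of that leverage using the ideal turbine specific-work relation.  The gas properties are hydrogen-ish, and the temperature, pressure ratios, and efficiency are all invented for illustration:

```python
def turbine_specific_work(t_inlet_k, pressure_ratio,
                          cp=14300.0, gamma=1.4, eta=0.8):
    """Ideal turbine specific work [J/kg]: eta * cp * T_in * (1 - PR^((1-g)/g)).
    cp ~14.3 kJ/(kg K) is roughly hydrogen; everything else is illustrative."""
    return eta * cp * t_inlet_k * (
        1.0 - pressure_ratio ** ((1.0 - gamma) / gamma))

closed = turbine_specific_work(600.0, 1.5)   # back-pressured by the chamber
opened = turbine_specific_work(600.0, 10.0)  # dumping into the nozzle
print(f"open cycle: {opened / closed:.1f}x the work per kilogram of drive gas")
```

With these invented numbers, the big pressure ratio extracts several times the work from each kilogram of heated hydrogen.  That’s precisely why the open cycle can run its pumps on less heat pickup and therefore scale to higher thrust.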

We have just about reached the end of this article but we have not reached the end of possibilities with expander cycle engine schematics.  That’s what makes them fun and, in my mind, kind of like playing with art.  You can come up with all kinds of combinations and additions.  For example, what if you took an expander cycle and added a little burner?  Over and over I’ve said that the limiting factor for a closed expander is the amount of heat that you pick up in the cooling jackets.  Well, okay then, let’s add a small burner that has no other purpose than to make the turbine drive gas hotter.  The result looks something like this:


This cycle has a gas generator but is not a gas generator cycle since the combustion products from that GG are not used to drive the turbines directly.  Rather, the GG exhaust is piped through a heat exchanger and then dumped overboard.  Yes, you lose a little of your performance efficiency because it’s no longer a closed cycle, but the GG flows can be small and what you get out of it is a boost in available turbomachinery power and therefore potential thrust.  That’s my own little piece of artwork, just to demonstrate that anyone can do it.


Remember Bob Ross from Public Broadcasting?  I loved watching his show and, as I’ve said, I can’t paint worth a lick.  But his show was relaxing to watch and listen to and he was always so relentlessly supportive.  There never were any mistakes.  Everything could be made all right in the end.  And anyone could make pretty mountains and happy little trees.  I’d like to suggest that the same is true about my little hobby of assembling happy little expander cycle schematics.  No, most will probably never be built or fly and the schematic portraits will probably never grace the walls of MoMA, but that’s okay.  My artist grandmother used to tell me that sometimes the purpose of doing art was not necessarily found in the end product, but instead as part of the journey of creation.

Inside the LEO Doghouse: Light My Fire!

This article is the second part of the story focused on how we start rocket engines.  In the last article, we discussed the matter of delivering propellants – oxidizer and fuel – into the combustion zone.  In this article, we will discuss how these propellants become fire and smoke (…or steam).  Of course, the musical reference for which you’re waiting ought to be based on the title of this article and the song by the Doors.  Right?  Well, with all due respect to The Lizard King, I would prefer to reference here the immortal writings of The Boss:

I will now be so bold as to translate Mr. Springsteen’s words into functional advice for rocket engines.  Sitting around and crying or worrying about the world are both passive, energy-draining activities.  The only way to start a fire is to add energy, e.g., a spark, to the situation.  He’s absolutely right.  And I would just bet that you never knew that The Boss was a rocket scientist.

In an article about combustion instabilities many months ago, I used the image below to illustrate a situation of limited stability.  The ball sitting on top of the hill will sit there forever unless or until something disturbs it.  Give it a little bump, i.e., an insertion of energy, and the whole scenario rapidly changes with the ball speeding down the hill.


This is also how I think about the process of ignition for typical, non-hypergolic (see previous article) propellants.  You can have fuel and oxidizer sitting around, intermixing, but until it gets that bump of an insertion of energy, there is no combustion.  No combustion; no high-energy gas production.  No high-energy gases; no propulsion.

Let’s start from the other end.  For a moment, think about a fire in your fireplace.  Once you’ve got a good fire up and going as in the picture below, you don’t have to re-start the fire each time that you add a log.  The existing fire sustains itself so that the energy produced by the combustion in one moment is sufficient to continue the fire into the next moment using additional fuel (the wood) and oxidizer (from the air).


This is generally the case for rocket propellants as well.  Once the fire is lit (i.e., once the ball is rolling downhill), the process is self-sustaining.  So, the whole issue about making a fire really does come down to the start of the process.

How many different ways can you start a fire?  One way is to use another fire.  Think about the folks running around the countryside with the Olympic torch before the games.  They use that torch to light another torch to light another torch, and on and on, all of the way until they light the big torch in the stadium.  Another way to start a fire is to use heat.  That, effectively, is how I lit a cigar the other evening.  I used friction to generate heat to ignite a match.  Then, holding the match like an Olympic torch, I used that fire to light the fuel of the cigar tobacco.  This model of a cascading series of larger and larger fires is used over and over in different forms.  Thus, when we talk about starting a fire, we often have to discuss not only the small initial energy bump, but then also the chain of events leading to the complete, steady state process.

So, first we have the initiation, or as The Boss said, “the spark.”  Off the top of my head, I can think of four ways that we’ve practically implemented on rocket engine systems to provide that initial energy boost and one other way that, to date, remains somewhat experimental.  There may be others, but these are the ones that are most obvious and frequently used in different forms.

The first method is exactly what The Boss calls for, an electrical spark.  In most cases when lighting liquid propellants directly, the components on rocket engines used to make electrical sparks are not a whole lot different than higher-energy, more robust, and more reliable versions of the spark plugs that you’ve got in your automobile.  They use a high-voltage electrical circuit to make a spark jump across a gap thereby exposing whatever is around that gap, namely vaporized propellants, to ionizing electrical energy.


The second method also uses electrical energy but in this case rather than making a spark, you use it to make heat.  Think about an incandescent light bulb (i.e., the bulbs rapidly becoming old fashioned these days).  The intent of the wire filament is to produce light.  And it does.  But it also produces heat.  What if you apply that heat directly to a combustible mixture?  Depending upon the mixture, that’s all you need.  I’ll explain more below when we talk about the cascade.

These first two methods rely on electrical energy and that’s always convenient since wires are easy to run.  While it’s true that the ultimate power source can be heavy for the vehicle (batteries for example), the rest of the system is relatively light and easy.  The third method for providing that initial energy bump is not quite so clean.  Rather than relying on transferring electrical energy into a chemical reaction, it uses a transfer of energy from one chemical reaction to kick off another chemical reaction.  In the previous article we discussed hypergolic propellants.  These are propellants that combust spontaneously when they come into contact with each other.  They don’t need any energy boost to start reacting.  Well, what if you had a fluid that did that when it came into contact with your primary fuel or primary oxidizer?  You could squirt in some of this spontaneously combusting stuff, light off a small bit of your fuel or oxidizer, and then the energy for that small fire could light off the rest of the propellants.  This is a common means for starting kerosene (also called RP-1) engines.  The massive F-1 engine used as part of the Saturn V vehicle was lit by a hypergolic ignition system for the main combustion chamber.  The most common hypergols for this purpose are triethylborane (a.k.a., triethylboron), triethylaluminum, or some mixture of the two.


The fourth and last method that I can think of for supplying that initial energy bump again starts with electricity, but instead of generating a localized spark or heat, you transform the electrical energy into a laser.  I will not even begin to pretend that I know much about lasers other than the fact that they can provide a very focused, directed beam of energy, photon energy in this case, to exactly where you want to put it.  You can use that energy to make heat for ignition or – and now I’m way beyond my knowledge base – you can tune the wavelength to excite the propellant molecules directly.  I have a friend in Germany who has experimented with using lasers for rocket engine ignition.  Thus far, I know of no fielded rocket systems where this ignition method is used (although I’ve been told that the Russians have demonstrated it on a full-scale engine), but it offers some very interesting possibilities.


So, we’re done, right?  After all, you’ve got your spark (or some other energy boost) so you’re lit and ready to go.  Well, not always.  For the most convenient ignition sources, specifically the electrically-flavored ones, our bump in energy, our spark or heat, is usually very localized.  Rocket propellants are usually highly energetic and that’s why they’re rocket propellants.  But that also means that you have to light the fire well.  I struggle with how to explain this in a positive sense, so I’ll explain it in the negative, i.e., tell you what you do not want to do.

In your combustion zone, you do not want to ignite just one small space, i.e., one corner, and let the fire spread unevenly.  A fire on one side of a combustion zone but not the other could allow unburned propellants to momentarily “pool” in the one region.  This could lead to detonation and/or conflagration pressure waves bouncing around your chamber until everything evens out.  That can be extremely dangerous to the point of tearing apart the engine.  Or maybe, because of these pooled, unburnt propellants, you get mixture ratios that cause hot streaks.  Most practical combustion chambers are not built to accommodate stoichiometric or oxidizer-rich combustion (unless it is specifically an ox-rich preburner where it should be very ox-rich to avoid this same issue).  A localized phenomenon of a slight ox-rich ignition could burn a hole right through a combustion chamber wall.  Or, if you’re talking about a gas generator or a preburner, you could get hot streaks that damage turbine components.  I have seen the kind of damage that can be done in a turbine due to ox-rich hot streaks for just fractions of a second.
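To put a number on why “ox-rich” is so easy to stumble into with hydrogen and oxygen, here is a quick check of the stoichiometric mixture ratio straight from the chemistry.  The “typical” operating ratio quoted in the comments is a round number for large hydrogen/oxygen engines generally, not a figure for any specific engine.

```python
# Stoichiometric oxidizer-to-fuel mass ratio for 2 H2 + O2 -> 2 H2O.
M_H2 = 2.016   # g/mol, molar mass of hydrogen
M_O2 = 31.998  # g/mol, molar mass of oxygen

stoich_of = M_O2 / (2 * M_H2)  # mass of O2 consumed per mass of H2
print(f"stoichiometric O/F = {stoich_of:.2f}")  # ~7.94

# Large H2/O2 engines typically run fuel-rich, around O/F of 5.5 to 6.0,
# precisely to keep combustion temperatures below what chamber walls and
# turbine components can survive.  Any local pooling that pushes a region
# up toward ~8 is headed for stoichiometric temperatures.
typical_of = 6.0
assert typical_of < stoich_of
```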


Ideally, what you want is for your propellants to arrive and, blammo, everything is lit.  That “blammo” can be difficult to achieve with a localized energy input like a spark or a small electrical heat source, especially for larger engines.  To overcome this issue, we turn back to the simple analogy of the fireplace.  There, we go from the localized effect of the match, to perhaps a ball or two of crumpled newspaper, to shavings or kindling, to larger sticks, to eventually the logs.  So there is a cascade of events from small and localized to large and generalized.  I will give you two examples of how we apply this concept in rocket engines.

The J-2X gas generator has a pyrotechnic ignition system.  It’s quite easy to tell people that we ignite the GG with little, solid propellant charges.  Okay, but is that the whole story?  No, it’s not.  The solid propellant charge (think about little Estes® rocket motors) is just the fire-lighting-the-fire end of the process.


It all starts with electrical current running through an igniter wire.  The electrical resistance of the igniter wire causes heat as the current passes through.  That heat is enough energy to push what’s called the “pyrogen” into ignition.  You can think of the pyrogen as being like the stuff on the head of a match.  Other flammable substances are often used but the idea is still the same.  That little fire in the initiator ignites the solid propellant and the solid propellant then shoots hot gases into the GG during the engine start sequence to ignite the hydrogen and oxygen just as they arrive.  Pyrotechnic igniters like this are highly reliable.  If that electrical current arrives, everything beyond that is pure chemical chain reaction that produces a powerful blast of ignition energy.  On the negative side, such an igniter can only be used once.  I guess that you could inspect and refurbish elements of the piece, but considering the trauma of the process it experiences, it is easier and cheaper to simply replace the whole thing.

Another example of the concept of using an ignition cascade can be found on the J-2X in the form of the torch igniter used for the main injector.  Here’s an interesting little piece of history (as it’s been told to me).  The J-2 engine, back in the 1960’s, was a pioneering effort.  While the RL10 was already flying, the use of hydrogen as a propellant was still something relatively novel.  For the J-2 main injector they developed a torch igniter system.  That system was later adopted and modified slightly for use as the ignition system for the Space Shuttle Main Engine main injector and both preburner injectors.  When we came to the development of J-2X, we started with our many years of successful experience with the SSME torch ignition system, made some modifications and, through a dedicated test program at the igniter level, effectively revalidated and expanded upon the pioneering efforts of the 1960’s.  It’s good to be part of another small step in that long and successful history.


The torch igniter concept starts with an electrical spark from what really looks like your ordinary automobile spark plug.  But such a spark is very small, very localized.  So what you do is swirl into that localized area just a little hydrogen and oxygen.  This is the kindling.  The electrical potential across the gap of the spark plug causes the gasified propellants to ionize and become very hot, hot enough to start to spread the fire, thereby creating a flame front.  That flame is then directed into the combustion zone just as the rest of the propellants are reaching the injector.  The whole igniter system is effectively a torch ejecting a flame into the combustion zone.  In the J-2X (and in the SSME and in the J-2), the torch is right in the center of the injector face.


Okay, so there you have it, in two articles, how to get a liquid rocket engine up and going.  First, you have to get the propellants moving to the right places and, second, once they’re there, you’ve got to light the fire.  For large rocket engines, the whole process from the receipt of the start command from the vehicle until the engine is functioning at full power level takes anywhere from about three to six seconds.  During that time, pumps have to start spinning, valves have to open, propellants have to reach their destinations in the correct proportions, and the ignition source has to try to light the fire not too early and not too late.  It really is quite an orchestration of events across a brief period of time.  And the more complex the engine, the more difficult it is to get the orchestration right.
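The “orchestration” point above can be made concrete with a toy sketch.  Every event time and margin below is invented for illustration; no real engine sequence looks exactly like this.  The one idea it captures is the constraint the author describes: ignition has to land inside a window after propellant arrival, not too early and not too late.

```python
# A toy sketch (hypothetical values, not any real engine's sequence) of
# start-sequence orchestration: events at commanded times, with ignition
# required to fall inside a window after propellants arrive.

start_sequence = {               # seconds after the start command (made up)
    "spin_start_gas_on": 0.0,
    "main_fuel_valve_open": 0.5,
    "main_ox_valve_open": 1.0,
    "propellants_at_injector": 1.4,
    "igniter_fire": 1.5,
    "mainstage_reached": 4.0,
}

def ignition_window_ok(seq, margin=0.5):
    """Ignition must come after propellant arrival but within `margin`
    seconds of it, or unburned propellants pool and you risk a hard start."""
    arrival = seq["propellants_at_injector"]
    ignition = seq["igniter_fire"]
    return arrival <= ignition <= arrival + margin

print("ignition timing OK:", ignition_window_ok(start_sequence))
```

Fire the igniter at, say, 1.0 seconds in this sketch and the check fails: the spark would come before there is anything at the injector to light.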

Looking into the database for SSME history, the very first test was conducted on 10 May 1975 with development engine #1 on test stand A1.  It was not until the forty-second test of the test series, nearly ten months later, that they eclipsed five seconds of firing duration and reached true mainstage operation.  So, it was not easy making that orchestration work.  Over the years, I’ve had the opportunity to meet and work with a handful of the folks who were there figuring out how to make the SSME work.  They were all very impressive engineers and thank goodness since we are still benefitting from their efforts.  And with that final historical note, we end this article with some more words of wisdom from The Boss:


Inside the LEO Doghouse: Start Me Up!

A number of years ago, I decided that I needed to read all of the great books that my educational journey had somehow missed along the way.  It is, of course, only well after one makes such an idiotic declaration and subsequent commitment that one recognizes the inherent idiocy in such a thing.  The bottom line is that there are a lot of really, really good books out there.  A whole lot.  Lots and lots.  And so, nine years into the effort (note that my idiocy has always been equally matched by my tenacity – not generally a good mix), I just recently read “The Grapes of Wrath” by John Steinbeck.  In one scene, Tom Joad can’t start their Hudson Super Six converted sedan/truck because the battery charge is too low.  So his brother, Al, gets out and fits the hand crank to the crank case and they start the engine with good, old fashioned elbow grease.

Image of the Joad's 1926 Hudson Super Six from "Grapes of Wrath" (1940), 20th Century Fox

While modern cars no longer have provisions for hand cranks, it occurred to me that there used to be at least three different ways to start a car.  You could use the electric motor, i.e., the starter.  You could use the hand crank.  Or, you could get the vehicle rolling and pop the clutch (a compression start).  I only have one of those three choices with my current vehicle, but the whole topic got me to thinking about the different means by which we start rocket engines rather than automobile engines. 

Just as with automobiles, there are different ways to start rocket engines and which way you choose is dependent upon the mission and circumstances of the start.  Some questions to ask include where do you intend to start the engine?  Will it be on the surface of the earth or the surface of the moon or somewhere in space in between?  Another question to ask is how many times within a mission does the engine need to be started?  And, getting down to the details, what are the propellants being used and what is the engine cycle?  Once you can answer these questions, then you can start to do trade studies with the stage and, ultimately, design your start system.

Okay, but when we say “start system” what do we mean?  For combustion engines, whether it is a rocket engine or an automobile engine or a jet engine, what you need to start are fuel and oxidizer mixed and in the correct environment and, if necessary, an energy bump for ignition.   In an automobile, the oxidizer comes from the air (gaseous oxygen) and the fuel comes from the gasoline tank.  The fuel and air are either made into a combustible mixture in the carburetor or, if you have fuel injection, they are mixed directly in the cylinder.  The optimal environment to get power out of the combustion is when this mixture is at an elevated pressure and so you have the moving piston within the cylinder compress the mixture before the spark plug provides the energy bump necessary to ignite the mixture.  Once ignited, the mixture undergoes a rapid chemical reaction that releases energy to push the piston down in the cylinder and, in concert with the other pistons strategically placed along the crankshaft, move the automobile forward. 


All of the principal parts of the process are necessary in order to make the engine work.  Gasoline all by itself isn’t of much use.  A puddle of gasoline, even in the presence of air, doesn’t make for much more.  Now, if you drop a match into a puddle of gasoline sitting in air, then you’ve got a fire and lots of heat, but you don’t really have the explosive power that you need to move a car.  What you need is a good mixture between the gas and the air, meaning that you have as close to a uniform mixture between the two as possible, and you need that mixture pressurized before ignition.  Thus, you need to have the fuel and oxygen delivery systems working and you need to have the pressurization system going and you need to have the well-timed spark to initiate the chemical reaction.  In an automobile, the feed system and the timing of the spark are all tied to getting the crankshaft moving.  So, once you’ve got the crankshaft spinning, the rest kicks in pretty much automatically.  That’s why you can start a car with an electric motor (normal start) or, if you have a manual transmission, by dropping it into gear once the car is rolling (using the motion of the car to get the shaft moving), or, as in “The Grapes of Wrath,” by having your brother hook up a hand crank and giving it a good heave ho!  Each method is intended to get the shaft spinning, initiate the propellant feed system, and provide the timely spark.  The “start system” for an automobile is that which gets the shaft spinning.


A standard large bi-propellant liquid rocket engine is startlingly similar to an automobile engine with regards to what’s necessary to get the thing started.  When I say “large bi-propellant,” I’m distinguishing the larger rocket engines that we normally discuss here in the blog from little thrusters used on spacecraft.  Very often these thrusters are monopropellant meaning that they consist of a tank feeding a chamber where the single propellant decomposes or vaporizes and then accelerates through a throat and expanding nozzle.  For such a system, “starting” amounts to not much more than ensuring that the propellant tank is pressurized and then opening a valve.  Also, I should mention that there is a whole class of small bi-propellant thrusters that use hypergolic propellants.  These propellants combust spontaneously when they come into contact with each other.  There’s still an oxidizer and a fuel, but you don’t need any spark or fire to get them going.  Because they are low-thrust rockets, small bi-propellant engines typically use pressurized tanks to feed the propellants into the combustion chamber.  There the propellants meet, react, and generate hot voluminous gases directed through a nozzle to make thrust.  The start system, as with the monopropellant rockets, is a set of valves.  The hypergolic propellants themselves take care of the rest.

Side note:  Hypergols (that’s the German rocket scientist word from the 1940’s that led to our word “hypergolic”) are almost without exception nasty substances.  If you have a hypergol spill on the launch pad, for example, you usually have to evacuate all personnel and send out a special clean-up crew.  Hypergols are used a great deal in the manner shown above for our launch vehicles when thrust needs are higher than what can be provided by the little monopropellant thrusters, but not so great that the system needs to have something like an RL10.  However, the Soviets/Russians/Ukrainians have built some extremely powerful, pump-fed hypergolic engines, one nearly as powerful as the F-1 in terms of thrust.  They still use some of them today.  Also, the Chinese use hypergolic propellants for launch vehicle primary propulsion.  That’s not something that we typically do here in the United States.  Indeed, we have whole programs dedicated to figuring out how to be less dependent upon such nasty propellants.  However, because of the inarguable simplicity and resulting high reliability of this kind of rocket, this was what was used to launch the crew module off of the surface of the moon during the Apollo Program (LMAE – Lunar Module Ascent Engine built by Bell Aerospace and Rocketdyne).  Sitting on a celestial body other than Earth is not any place where you want to question the likelihood of having your engine start so you make it as simple as possible (and, besides, the local residents don’t complain much about the noxious plumes).

Apollo 16 ascent from the lunar surface 23 April 1972.  On board were John Young and Charles Duke.  They were met in lunar orbit by T.K. Mattingly.

That then leaves what I really want to talk about and that’s start systems for big rocket engines, bi-propellant and pump fed.  So, as I was saying, what you need to start a rocket engine of this class is startlingly similar to what you need to start an automobile.  You need fuel and oxidizer, in the right conditions, and you need something to start the combustion.  So, how do you get propellants into your combustion chamber?  In a large pump-fed system, your stage propellant tanks are typically not designed for very high pressures (since that would make them far too heavy for flight) so if you just rely on a pressure differential to push the fluid, it’s not going to move very fast. 

However, there are engines that do actually perform engine starts using only what the tanks have to give the engine.  The amazing RS-25 (formerly the Space Shuttle Main Engine) is one such engine.  The very early portion of the start sequence relies only on pressure in the propellant tanks and the pressure resulting from being in a gravitational field (head pressure = ρgh) to feed propellants into the preburners that drive the turbopumps.  Then, once the engine gets a little fire going there and starts to provide power to the turbopumps, they take over authority and control as the system comes up to full operating pressures.  Another engine that starts with only tank pressure is the RL10.  This is quite impressive when you realize that the RL10 starts in space, with no help from gravitational head pressure.  It’s also impressive since the RL10 is an expander cycle engine meaning that what is used to drive the turbine is gas heated through the regenerative cooling passages of the combustion chamber.  But when the engine is just starting, there is not yet any fire in the combustion chamber so it’s using only whatever residual heat is in the metal to provide the energy to get the pumps going.  Since the RL10 has to start multiple times within a given mission, you can imagine how the shutdown from the previous start might have an influence on the next start.  Shutting down with everything cold is generally the safest way to go, but if you leave everything too cold, then you might not have enough residual heat in the system for the next start.  In both cases, for the RS-25 and for the RL10, this process of starting with only low pressure propellants and relatively slowly building up pressure is called “boot-strapping” (as in “pulling yourself up by your own bootstraps”).
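To get a feel for how little pressure “ρgh” actually buys you, here is a quick calculation.  The column height and tank pressure are assumed round numbers for illustration, not figures from the Shuttle or any other vehicle; the liquid oxygen density is the standard value near its boiling point.

```python
# Rough feel for the head-pressure term (rho * g * h): how much pressure a
# column of liquid oxygen adds at the pump inlet.  Height is an assumed
# round number, not real vehicle data.

rho_lox = 1141.0   # kg/m^3, density of liquid oxygen near 90 K
g = 9.81           # m/s^2, gravitational acceleration
h = 15.0           # m, assumed height of the propellant column above the pump

head_pa = rho_lox * g * h
head_psi = head_pa / 6894.76   # convert Pa to psi
print(f"head pressure: {head_pa/1000:.0f} kPa ({head_psi:.1f} psi)")

# Add an assumed tank ullage pressure of a few tens of psi and the pump
# inlet still sees well under 100 psi -- compared to thousands of psi at
# full operating conditions.  That is why boot-strapping builds up
# pressure relatively slowly.
```

Run the numbers and you get on the order of 25 psi of head, which makes it clear just how gentle a boot-strap start is until the turbopumps begin pulling themselves up.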


In order to get a more powerful start sequence you can use what is known as “spin start.”  A spin-start system uses a high pressure or high energy gas to get the turbines spinning prior to the rest of the start sequence.  This effectively pre-pressurizes the whole system.  There have been different gas sources designed over the years for spin start.  The original J-2 had a small, high-pressure tank built right onto the engine.  For the S-IVB stage (the Saturn V vehicle third stage), after engine start and during operation, the J-2 engine re-filled the start tank with high pressure hydrogen.  This refilling was necessary since an engine restart was necessary for the kick firing that sent astronauts to the moon.  An experimental engine derived from the J-2 in the 1970’s called the J-2S used solid-propellant gas generators as the spin-start source.  These were, just as the name implies, little solid propellant rockets that put out a lot of hot gas for a short duration, just long enough to get the turbopumps spinning, typically only a couple of seconds.  For J-2X, we considered both the J-2 and the J-2S options for spin start.  In the end, however, we chose a third path.  The J-2X engine uses very high pressure helium supplied by the vehicle stage as the spin-start gas.  Note that for the J-2, J-2S, and J-2X engines the spin-start gas is fed directly into the same turbines used during engine mainstage operation.  A frequent design approach used for Soviet and Russian rocket engines is to have a separate, dedicated, and optimized spin-start turbine on the same shaft as the primary turbine.  This approach is generally best suited for kerosene engines since it is quite common to have the liquid oxygen and kerosene pumps on a common shaft.
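A back-of-envelope sketch shows why a modest high-pressure bottle is plenty for a spin start.  Every number below is assumed for illustration (rotor inertia, target speed, bottle size and pressures); the point is only the comparison between the blowdown work available and the rotor kinetic energy required.

```python
# Back-of-envelope comparison (all numbers assumed, not from any real
# engine): energy in a high-pressure helium spin-start bottle versus the
# kinetic energy needed to bring a turbopump rotor up to an initial speed.
import math

# Rotor side: assumed inertia and an assumed spin-start target speed.
I = 0.5                        # kg*m^2, assumed rotor moment of inertia
rpm = 5000.0                   # assumed speed the spin start must reach
omega = rpm * 2.0 * math.pi / 60.0   # convert rpm to rad/s
rotor_ke = 0.5 * I * omega**2
print(f"rotor kinetic energy : {rotor_ke/1000:.1f} kJ")

# Bottle side: isothermal expansion work available from blowdown,
# W = P1 * V1 * ln(P1/P2), again with assumed round numbers.
p1 = 30e6                      # Pa, ~4350 psi storage pressure
p2 = 2e6                       # Pa, pressure where the spin start ends
v1 = 0.05                      # m^3, assumed bottle volume
bottle_work = p1 * v1 * math.log(p1 / p2)
print(f"available blowdown work: {bottle_work/1000:.0f} kJ")
```

Even if only a small fraction of that blowdown work actually makes it through the turbine to the shaft, there is comfortable margin, which is consistent with spin-start gas supplies only needing to run for a couple of seconds.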

Whether you choose to design a boot-strapping system or a spin-start system is a decision with a number of variables and considerations.  A boot-strapping system is nice since you don’t have any auxiliary start systems to worry about.  But often, in order to get such a system to work, the system valves need to be very carefully manipulated to keep the boot-strapping process going.  Well, that careful valve manipulation implies a more complex valve actuation system than just simple open-closed valves.  On the other hand, you could go with a spin-start system, use a simpler valve actuation system, and possibly get more powerful, faster, and more repeatable engine starts, but then you’re paying the price in vehicle performance for having to carry along an additional auxiliary system.

What are the trade factors on weight of one approach versus the other?  How about development or production cost?  And, here’s a tough one, which one is more reliable (and, if humans are on the flight, therefore safer)?  Do these factors change if we have to start the engine more than once during the mission?  The answer to this last question is yes, of course, but not always how you would expect.  These are the kinds of trade studies that you have to do as part of engine and stage design.  There is no singular solution.  Even general ground rules are tough to distill from our history to date.  The need is clear:  start to feed propellants to the combustion chamber and have the system build up pressure.  How you get there has multiple potential answers.  Unlike an automobile, however, nobody has yet proposed using an electric motor for large engines since that motor would represent too much useless weight to carry along with you in flight.  Also, so far as I know, nobody has yet suggested a hand crank for a rocket engine.  However, we do have evidence of a hand crank start for a gas generator test…

In the next article, I will discuss the issue of what you do with the propellants once you get them to the combustion chamber.  That is the topic of ignition.  Oh, and by the way, in terms of John Steinbeck’s books, I would more highly recommend “East of Eden” over “Grapes of Wrath.”  But then, what do I know?  I’ve got another eight gazillion books to go in my education…

P.S., Just in case there is any confusion … that video was not real.  It was assembled by our own Dennis Olive and David Reynolds (both MSFC) several years ago when we were testing the J-2X workhorse gas generator.

J-2X Progress: November 2013 Update

It’s been a few months since we talked about J-2X development progress.  So, let me bring you up to date.  Here’s the short version:

  • Testing for engine E10002 is complete
  • Engine E10003 has been installed in test stand A-2 and has successfully completed its first test (a 50-second calibration test on 6 November)
  • Engine E10004 is in fabrication

Okay, so that’s it.  Any questions?

Oh, alright, I’ll share more.

Engine E10002 is the first J-2X to be tested on both test stand A-2 and on A-1.  It saw altitude-simulation testing using the passive diffuser on A-2 and it saw pure sea-level testing on A-1 during which we were able to demonstrate gimballing of the engine.  Below is a cool picture from our engineering folks showing a sketch of the engine in the test position on A-2.


The clamshell shown in the sketch effectively wraps around the engine in two pieces and the diffuser comes up and attaches to the bottom of the clamshell.  This creates an enclosed space that, while the engine is running, creates the simulated altitude conditions.  I’ll show some more pictures of the clamshell when we talk about engine E10003 below.  Next is a cool picture of engine E10002, hanging right out in the open, while testing on A-1.


This next table gives a history of the engine E10002 test campaign across both test stands:


So, let’s talk about the three times that we didn’t get to full duration.

The first time, on test A2J022 we had an observer cut.  Just like that sounds, there was actually a guy watching a screen of instrumentation output and when he saw something that violated pre-decided rules, he pushed the cut button.  We use such a set-up whenever we’re doing something a little unusual.  In this case, we were making an effort to reduce the amount of cooling water that is pumped into the diffuser.  It was our general rule-of-thumb to “over cool” the diffuser.  After all, who cares?  It’s a big hunk of facility metal that we wanted to preserve for as long as possible despite the fact that it always gets a beating considering where it sits, i.e., in receiving mode for the plume from a rocket engine.  However, one of the objectives for our testing was to get a good thermal mapping of the conditions on the nozzle extension.  What we’d found with our E10001 testing was that all of the excess water that we were pumping into the diffuser was splashing up and making our thermal measurements practically pointless.  Thus, we had to take the risk of reducing the magnitude of our diffuser cooling water.  As I’ve said many times, there are only two reasons to do engine testing: collect data and impress your friends.  If our data was getting messed up, then we had to try something else.  Eventually, through the engine E10002 test series we were able to sufficiently reduce the diffuser cooling to the point where we obtained exceptionally good thermal data.  This first test on which we cut a bit early was our first cautious step in getting comfortable with that direction.

The second early cut was caused by some facility controller programming related to facility instrumentation.  Here is a little tidbit of neato information that I’ve probably not shared before about our testing: in the middle of longer runs, we transfer propellant from barges at ground level upwards and into the run tanks on the stand.  The run tanks are pretty big, but they’re not big enough to supply all of the propellants needed for really long tests.  When I’ve shown pictures of the test stands in the past, you’ve seen the waterways that surround and connect all of the stands.  These are used to move, amongst other things, barges of propellant tanks.  Liquid oxygen is transferred using pumps and liquid hydrogen, being much lighter, is transferred by pressure.  In the picture below, you can see a couple of propellant barges over to the left.  This is an older photo of a Space Shuttle Main Engine Test on stand A-2.


Thus, in addition to monitoring the engine firing during a test, you also have to watch to make sure that the propellant transfer is happening properly.  The last thing that you want to happen is to have your engine run out of propellants in the middle of a hot fire test.  On test A2J025, there was an input error in the software that monitors some of the key parameters for propellant transfer.  Thus, a limit was tripped that shouldn't have been tripped and the facility told the engine to shut down.  Other than some lost data towards the end of this test (data that was picked up on subsequent testing), no harm was done.

On test A2J027, there was something of an oddball situation.  We have redlines on the engine.  What that means is that we have specific measurements that we monitor to make sure that the engine is functioning properly.  During flight, we have a limited number of key redline measurements and these are monitored by the engine controller.  During testing we've got lots more redline measurements that we monitor with the facility control system.  When we're on the ground, we tend to be a bit more conservative in terms of protecting the engine.  The reason for this is that when we're flying, the consequences of an erroneous shutdown could mean a loss of mission.  Thus, we have different risk/benefit postures in flight versus during ground testing.  [Trust me, the realm of redline philosophy is always ripe for epic and/or sophist dissertation.  Oh my.]  Anyway, with regards to test A2J027, when doing ground testing we shut down not only when a redline parameter shows that we may have an issue (as happened erroneously on test A2J025) but also if we somehow lose the ability to monitor a particular redline parameter.  Thus, we did not shut down on test A2J027 because we had a problem or because we had a redline parameter indicating that we might have a problem.  Rather, we shut down because we disqualified a redline parameter.
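The two shutdown postures can be sketched in just a few lines of logic.  To be clear, this is purely illustrative: the function names, limits, and structure here are invented for the sketch and are not the actual facility control system software.

```python
# Illustrative sketch of flight vs. ground redline postures.  All
# names and limits are invented; the real control logic is far more
# involved than this.

GROUND_TEST = True  # ground testing uses the more conservative posture

def should_shut_down(value, lo, hi, sensor_qualified):
    """Decide whether a single redline parameter calls for shutdown."""
    if not sensor_qualified:
        # We've lost the ability to monitor this redline.  In flight
        # we keep running; on the ground we shut down out of caution.
        return GROUND_TEST
    # In either posture, a true limit violation means shutdown.
    return not (lo <= value <= hi)
```

On test A2J027 it was the first branch, a disqualified parameter rather than a limit violation, that ended the test.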

On J-2X, wherever we have a critical measurement (meaning that it is a parameter that can control engine operation, including redline shutdown) we have a quad-redundant architecture.  In the sketch below, I attempt to illustrate what that actually means.


Thus, we have two actual measurement ports and each port has two independent sets of associated electronics.  We are doubly redundant in order to ensure reliability.  However, does a man with two watches ever really know what time it is?  No, he doesn't because he cannot independently validate either one.  We have a similar situation, but in our case we simply want to make sure that none of the measurement outputs that we are putting into our decision algorithms are completely wacky.  So we do channel-to-channel checks and we do port-to-port checks to ensure, at the very least, some level of reasonable consistency.  Thus, we cannot know the exact answer in terms of the parameter being measured, but we can decide if one of the measurement devices themselves is functioning improperly.  This process is called sensor qualification.  On test A2J027, our sensor qualification scheme told us that one port was measuring something significantly different than the other port, different enough that something was probably wrong with at least one of the sensors.  That resulted in disqualifying the measurements from one of the two ports.  In flight we would have kept going unless or until the remaining port measurements notified us of a true problem, but on the ground, as I discussed, we are more conservative.
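A toy version of that channel-to-channel and port-to-port logic might look like the following.  The tolerances are made up, and the behavior when the two ports disagree is simplified (this toy version flags both ports, whereas the real scheme has more sophisticated disposition logic).

```python
# Toy sensor qualification for a quad-redundant measurement: two
# physical ports, each with two independent electronics channels.
# Tolerances are invented for illustration.

def qualify(port_a, port_b, channel_tol=5.0, port_tol=25.0):
    """Each port is a (channel_1, channel_2) pair of readings.
    Returns a dict of ports judged healthy, mapped to averaged values."""
    healthy = {}
    for name, (c1, c2) in (("A", port_a), ("B", port_b)):
        # Channel-to-channel check: two channels reading the same
        # physical port should agree closely.
        if abs(c1 - c2) <= channel_tol:
            healthy[name] = (c1 + c2) / 2.0
    # Port-to-port check: the two physical ports should be reasonably
    # consistent; a large offset means at least one is suspect.
    if len(healthy) == 2 and abs(healthy["A"] - healthy["B"]) > port_tol:
        # The man with two watches: we can't tell which port is wrong,
        # so this toy version conservatively disqualifies both.
        return {}
    return healthy
```

Note that, as on test A2J027, a check like this can trip on a real physical offset in the flow rather than a broken sensor.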

When we investigated the apparent issue, what we discovered was that we should have predicted the port-to-port offset.  It turns out that due to the engine conditions that we'd dialed up for that particular test, we were running the gas generator at a mixture ratio higher than we'd yet run on the engine.  When we went back and examined some component testing that we'd done with the workhorse gas generator a couple of years ago, that data suggested that yes indeed, when we head towards higher mixture ratio conditions, our two measurements tend to deviate.  This suggests, perhaps, a greater amount of localized "streaking" in the flow at these conditions.  Localized effects like this are not uncommon in gas generators or preburners.  Because of the particular configuration of the J-2X, with more mixing available downstream of the measurements, the impact due to such variations on the turbine blades is minimized.  This too was shown in the component level testing.  Thus, the sensors were fine and the engine was fine.  It was just our qualification logic that needed reexamination.  Sometimes, this is how we learn things.

So that tells you all about those handful of cases where we didn’t quite get what we intended.  Overall, however, the engine E10002 test campaign was truly a rousing success.  Here are some of the key objectives that were fulfilled:

  • Conducted 13 engine starts – 10 to primary mode, 3 to secondary mode – including examinations of interface extremes for a number of these starts.
  • Accumulated 5,201 seconds of hot-fire operation.
  • Performed six tests of 550 seconds duration or greater.
  • Conducted eight “bomb” tests to examine the engine for combustion stability characteristics.  All tests showed stable operation.
  • Characterized nozzle extension thermal environments.
  • Characterized “higher-order cavitation” in the oxidizer turbopump.
  • Demonstrated gimbal operation (multiple movement patterns, velocities, accelerations) with no issues identified.

Hot on the heels of the success of engine E10002, we have engine E10003 assembled and ready to go.  I love this picture below.  This is the engine assembly area.  We have three engine assembly bays and, on this one special occasion, we happened to have each bay filled.  Engine E10001 is all of the way on the left.  It is undergoing systematic disassembly and inspection in support of our design verification activities.  Engine E10002 had just come back from its successful testing adventure.  And engine E10003 is all bundled up and ready to travel out to the stands to begin his adventure.  [I’m not sure why this is the case, but E10003 has a male persona in my mind so the possessive pronoun “his” seems to fit best.]

In the picture below you see E10003 being brought into the stand at A-2.  Note the water of the canals in the background.  See the concrete pilings over to the left in the background.  Those are where the docks are for the propellant barges that we discussed above.

In the pictures below, you can see E10003 installed into the test position.  The picture on the left shows half of the clamshell brought down into place.  Compare this picture to the sketch at the beginning of this article.  The picture on the right shows what the engine looks like with both halves of the clamshell brought down into position.

So that's where we stand.  Engine E10003 began testing in November 2013 and will continue on into 2014.  As always, I will let you know how things are going and if anything special pops up, you can be sure that we'll discuss it here at length.  After all, there's not a whole lot that's more fun than talking about rocket engines.


LEO Extra: The Engine as a House

Imagine that you want to build a house.  I have never built a house or had a house built for me but I know a number of people who have.  They have all used different words to describe their experience, but if I could encapsulate their descriptions into a single expression it would be this, “Building a home from scratch is an adventure.”  That word, “adventure,” is useful here since it can mean a whole spectrum of positive and negative experiences.  For this article, I would like to describe the analogous adventure of rocket engine development.  While the processes of building a house and developing a rocket engine do not mirror each other exactly, they do have some similarities from the perspective of project management and procurement considerations.


Why does one build a house?  Because one perceives a need.  Is it an absolute need?  Are there not other houses around that might fit the bill?  Yes, perhaps.  But a need is perceived and that need is translated to an overall objective with an array of attendant requirements.  I once talked with a technician at a small company in California that was doing some very specialized testing for us out in the desert.  This was during development of the X-33 vehicle.  In years past, he had worked for a housing contractor and he was describing some of the more odd custom configurations for houses that they'd been contracted to build.  One that I remember him describing was a 20,000 square-foot home (over ten times the gross size of my own home) with all kinds of specialized rooms, seven full or partial bathrooms, but only one bedroom.  I assume that such a home fit the bill for someone with particular needs but I cannot imagine that it would do very well on the re-sale market.

The point, however, is that you start with your objective, "I want a new home built to my needs," and then you list out your needs.  Maybe you want a game room.  Maybe you have a large family and need seven bedrooms.  Maybe you or your spouse is a gourmet cook and so you want a kitchen that fulfills that talent.  Or maybe you have a specific, scenic plot of land and you want something that fits into the terrain a la Frank Lloyd Wright and the Fallingwater house.  Whatever your combination of requirements, you have decided that there is nothing on the market that meets these needs and so you are going to build a new house from scratch.


That is not too different from what we do with rocket engines.  It starts with the mission and the mission is translated to a vehicle and then the needs of the vehicle become propulsion needs and therefore rocket engine requirements.  In the past, I've talked about the process of requirements decomposition along these lines.  In some cases, you can find something "on the market" that fits the bill.  You can think of the RS-25 for the Space Launch System (SLS) along these lines.  The RS-25 was the Space Shuttle Main Engine and now, with some adaptations, it will become the core-stage engine for the SLS Program.  It's like a house on the market that would meet your needs with a few renovations.  In other cases, you look around and nothing quite fits the bill.  That is like J-2X.  That effort has been like building a house from scratch.

So, you want to build a custom house?  What’s next?  Well, you hire an architect to design the thing.  Interestingly, on the subject of NASA and rocket engines, we at NASA sometimes do some of the architectural design work ourselves.  This was especially true in the past.  It was as if you were yourself the architect and you could design much of the house yourself.  These days, however, it more often becomes a collaborative process for the architecture of the engine and that’s probably also quite true when building a house in that the architect and the owner work together in developing the final plans.  Who then is responsible for building the house?  Well, that’s your general contractor.  On the rocket engine side, that is what we call our prime contractor.  They are responsible for delivering the final product.  In some cases, the architect and the general contractor function as a unit when building a house.  Other times, they are separate (and that, I’ve been told, can lead to all kinds of “fun”).  For a rocket engine, there is rarely a separation between architect and contractor other than the degree to which NASA assumes the role of architect.


Above is a simplified sketch of how a new custom house comes to be (I am really quite the artist, aren’t I?).  For a rocket engine, everything to the right of “owner” is what we contract out.  These days, depending on the project, we contract out anywhere from 70% to 90% of the expense of developing a rocket engine.  Thus, our “government engine” is almost always, in reality, the product of a commercial company.

Over to the right of the little diagram, I show various entities listed as “subcontractors.”  For a home construction effort, these could be plumbing and electrical subcontractors.  They could be the roofers or perhaps the landscape folks.  It is rare that a general contractor maintains all of these disciplines within his own shop.  Sometimes the general contractor is nothing but a manager and all of the real construction work is done by hired subcontractors.  Sometimes, the general contractor does have his own capabilities and only hires out for a couple of specialized things.  It all depends upon the business model that is found to work best.

The same is true for rocket engines.  Our prime contractor for J-2X and RS-25, Aerojet Rocketdyne, has a number of exceptional in-house capabilities.  But there are also a number of things that are obtained by going through subcontractors or vendors.  In our world, we typically think of a "subcontractor" as a company that is working on and delivering a large piece of the engine like, for example, the engine controller whereas we typically think of "vendors" or "suppliers" as those who supply smaller pieces (but, admittedly, the lines across these definitions sometimes blur).  Of the latter, vendors and suppliers, we have many examples.  One is a small company that specializes in the fabrication of high-quality metal bellows.  This is an extremely important and difficult skill.  There is another, somewhat larger company that specializes in making tubes.  While that sounds straightforward, consider the tubes that make up the RS-25 nozzle.  These are about six feet long with a cross-sectional profile that changes in size and shape along the entire length.  And the necessary tolerances are extremely tight.  There probably are not many companies on the planet who can deliver what is necessary to make the RS-25 nozzle a reality.  Or in some cases, as with the main combustion chamber, while the prime contractor does much of the machining and assembly work in house, they are dependent upon a supplier for the specialized alloy that is used for combustion chamber liners.  It's not something that you can get off the shelf of the local hardware store.  The truth is that we depend upon a broad array of exceptional subcontractors, vendors, and suppliers to make rocket engines a reality.


The next issue to consider is what you do as owner as your house is being designed and built.  On the one hand, you could walk away, take a trip to Nepal, and come back to find your house complete, but if you're anything like me, that's not what would happen.  As I said, I've never been through the process personally, but if I were going to do it, I would be checking in on the progress on a regular basis.  Now, I am quite sure that there is a balance to be maintained such that insight doesn't become interference, but it is, after all, my money being spent to pour the concrete and put up the framing and hang the doors, etc., so I ought to be able to have some access and right to judge the effort as it goes along.  And, should we run into schedule delays and cost overruns, then I ought to have the right to raise an indignant ruckus.

Again, the same is true for our rocket engines except that we are spending a great deal more money than the cost of your typical house and so our processes for contract performance surveillance are a bit more involved and formal than showing up at the construction site with a level and a plumb line at the end of the workday.  We have continuous insight into what the contractor is doing and why.  While we don’t control the subcontracts, we have access to the records, insight to the activities, and, for the larger ones, the responsibility to review and authorize contract placement.  Quite simply in this case, we don’t want large chunks of our money to go to shoddy subcontractors.  Thankfully for us, our prime contractor does an excellent job of ensuring that their subcontractors and vendors are up to snuff thereby helping to ensure their own success.  All of this is on a day-to-day basis.

We also have monthly project reviews where we assess technical progress coupled with business considerations using a system called “earned-value management” or EVM.  EVM is a budgeting and scheduling tool that continuously charts progress and expenditures against the approved plan.  With EVM, the contractor can identify, and we can see, potential problem areas within the overall project down to the detailed task level, if necessary.  With an effort as extensive as developing a rocket engine, you cannot expect everything to go smoothly at all times so you don’t use EVM as a scream-and-yell tool for punishment.  Rather, it is properly used as a way to re-prioritize allocated resources, to prioritize the application of reserves (if available), or to initiate additional management oversight or whatever else might be necessary to address the problem.
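The core arithmetic behind EVM is simple: compare the budgeted value of the work actually completed ("earned value") against both what was planned and what was spent.  A minimal sketch, with made-up dollar figures:

```python
# Minimal earned-value management (EVM) indices.  The three inputs are
# the standard EVM quantities; the dollar figures below are invented
# purely for illustration.

def evm_indices(planned_value, earned_value, actual_cost):
    cpi = earned_value / actual_cost    # cost performance index
    spi = earned_value / planned_value  # schedule performance index
    return cpi, spi

# Example: $10M of work planned to date, $9M of it actually completed
# ("earned"), at an actual spend of $11M.
cpi, spi = evm_indices(10e6, 9e6, 11e6)
# cpi < 1.0 means over cost; spi < 1.0 means behind schedule -- both
# are flags that can be chased down to the detailed task level.
```

In practice these indices are tracked continuously against the approved plan, which is what lets potential problem areas surface before they become crises.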


And, finally, we have an extremely formal process for contract oversight built right into the federally regulated procurement process.  It is called, simply, "performance evaluation."  On a regular basis, a formal report is compiled and submitted wherein the contractor is evaluated on a range of established criteria spanning technical, business, and managerial considerations.  A board of high-ranking officials reviews this report and the contractor receives what is the equivalent of a report card.  Depending on the structure of the contract, this report card can determine how well the contractor is paid and, therefore, it can have significant repercussions from a corporate perspective.  In other words, this becomes very serious stuff.

The key thing to remember at all times when engaged in contractor oversight – and I suspect that this is just as true with building a house as it is for building a rocket engine – is that you are successful only to the degree that everyone is successful.  While there may be a momentary sanctimonious thrill in identifying that the plumbing subcontractor is an unreliable scoundrel, if you don't actually find a way to fix the problem in an efficient manner, then your house is still going to have lousy plumbing or you are going to go broke with cost overruns or, possibly, both.  Thought of properly, the relationship between the entity spending the money and the entity doing the building is a form of partnership.  Yes, the different parties have different fiduciary responsibilities and long-term objectives, but you want a good house and your general contractor wants to be recommended for potential future work for your friends who might be building their own houses in the years to come.  So a good final result is in everyone's best interest.

No, the analogy between building a home and building a rocket engine is not perfect, but it does help in understanding the important aspects of identifying requirements, establishing an organization of an architect, a general contractor, and an array of capable subcontractors, and the issue of good performance surveillance.  People often ask me what I do for a living and I say, "I'm part of management for rocket engine development," which doesn't really say much that is helpful.  If I had the time, however – and if they were truly interested – I'd explain the job of the Liquid Engines Office to them with this analogy of building a house.  I would also tell them that my job is sometimes an adventure.





LEO Extra: Coming to a Resolution / STS-104 Part 3

I am starting the writing of this article on the first day of the month so, “rabbit, rabbit, rabbit.”  There, now that’s done with (a silly, harmless old-world superstition shared with me by my mother – look it up!).

Meanwhile back at the ranch, Auntie Jane fell down a well and Jake the hound dog led the sheriff to Granddad's moonshine still in the barn…

Okay, okay, I'll stop stalling.  This is the third and final article about the in-flight anomaly on STS-104.  In the first article, we talked about how everything apparently went so well for the first launch of the Block 2 configuration of the Space Shuttle Main Engine.  This was the culmination of over a decade of incremental work to transform the SSME into a safer, more reliable engine.  In the second article, we talked about the "uh-oh" moment when we found pressure rises that, at first, just seemed a little unusual and then, upon further research, were found to be extreme as compared to what we had seen before in the flight program.  In this article, we will discuss why this anomaly happened, why we missed predicting that it would happen, and what we did to keep flying safely beyond that point.

Why it happened, part 1: the rotor
Did you ever notice how much coordinated effort it takes to slow down an airplane as you’re landing?  They drop the landing gear to increase drag.  They slow the engines and drop flaps and tip the nose up, getting more lift but at the price of additional drag.  And when you’re finally on the ground, they throw up more flaps and sometimes use loud thrust reversers and, lastly, they use the brakes on the wheels.  All this effort is necessary because you’ve got this great big thing with lots and lots of energy and momentum and you’ve got to bring it to rest.  That’s a great deal of energy to dissipate.


Now, think about the rotor in the SSME high-pressure fuel pump.  After engine shutdown, it's spinning down, decelerating, from over 30,000 rpm to zero in just a few seconds.  Just like the airplane in a relative sense, that's a great deal of energy to dissipate in a very short time.  So, where does the energy go?  Remember, short of Einstein's relativistic effects (not relevant here), energy is neither created nor destroyed.  It is only transferred.  When we say that we dissipate the energy from the rotor, what we really mean is that the energy comes out of the rotor and into, well, um … what?

Some of the dissipation is due to mechanical friction.  But we’ve got really, really good bearings in that turbopump, and there aren’t any brakes (i.e., the energy dissipation tools used for your car), so friction is a very small piece of the process.

The only other thing that you have is the fluid in the pump, the residual liquid hydrogen left there after shutdown.  Think again about the plane landing.  Many of the things done to slow the plane rely on drag, which is basically relying on putting energy into the working fluid, i.e., air.  We do kind of the same thing with our working fluid, the residual hydrogen.  That's why we close the valves the way that we do and effectively lock up the fluid in the line rather than just let it all drain out of the pump portion of the turbopump.  Because the turbine end of the turbopump is no longer being powered and because the rotor is continuously transferring energy to the fluid, the result is that the rotor slows down.  Ta-da!  The plane has landed and the rotor is slowed.

But what happens when you put energy into a fluid?  In the case of the landing plane, the reservoir of Earth’s atmosphere is so huge that there’s basically no effect.  But in the case of the pump, it’s more like the boiling, covered pot on the stove discussed in the previous article.  It is a fixed, trapped volume into which you are putting energy.  Thus, the pressure rises.

“Eureka!” you say. “We have identified the source of the STS-104 pressure rise.”  Well, sort of.  We always expect a pressure rise.  That’s part of the process.  If you go back to the previous article, you’ll see that there was a pressure rise for all three engines.  It’s just that the pressure rise for the Block 2 configuration engine was so much greater.

Remember back to the first article when I was explaining that the Block 2 fuel turbopump was safer, in part, because it was heavier?  We were able to allocate more weight to the designers and so they used that extra weight (along with many lessons learned) to increase the safety margins.  A heavier rotor means that when it’s spinning, it has more energy than a lighter rotor spinning at the same speed.  Thus, a heavier rotor should mean more energy dissipation/transfer and therefore more pressure rise.
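To put a rough number on that intuition, you can model the rotor as a simple solid cylinder.  The mass and radius below are placeholder values picked for illustration (the real rotor geometry and masses are not given here), but the scaling is the point: at a fixed speed, kinetic energy grows linearly with mass.

```python
import math

# Kinetic energy of a spinning rotor, crudely modeled as a solid
# cylinder.  Mass and radius are placeholder values, not real SSME
# turbopump data.

def rotor_energy_J(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m**2   # solid cylinder, kg*m^2
    omega = rpm * 2.0 * math.pi / 60.0      # convert rpm to rad/s
    return 0.5 * inertia * omega**2

baseline = rotor_energy_J(50.0, 0.10, 30000.0)  # on the order of a megajoule
heavier = rotor_energy_J(60.0, 0.10, 30000.0)   # 20% more mass
# At fixed speed and radius, energy scales linearly with mass, so a
# heavier rotor dumps proportionally more energy into the trapped fluid.
```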

No, this is not a SSME/RS-25 turbopump. This is a commercial multi-stage pump, but a very good picture showing the multiple impellers (in this case four towards the left end) all attached in series on the rotor shaft. The SSME high pressure fuel turbopump uses the same principle. Credit: Dickow Pump Company

“Eureka!” you say. “We have identified the source of the STS-104 pressure rise.”  Again, not quite.  Note that this was practically the same thought path that we followed as we unraveled the STS-104 anomaly.  By this time, we were deep into our analytical modeling efforts.  We had models for the engine transients – in other words start up and shutdown – but we had not predicted this effect to any great detail.  And even when we carefully accounted for the greater mass of the Block 2 rotor, we could not entirely recreate the STS-104 anomaly.  Yes, we did indeed get higher pressures in the line, but not as high as we saw in flight.

Why it happened, part 2: “thermal mass”
Here’s an experiment.  Heat your oven to 350 degrees.  Put potato and a radish, each wrapped in aluminum foil, on a shelf in the hot oven.  An hour later, take them out.  Now, the experiment part is drop them into ice water and observe how much time it takes before you can comfortably pick up and hold each foil-wrapped vegetable with your bare hands.  While I cannot say [confession coming] that I’ve actually done this experiment in preparation for this article, I am quite confident that the radish would cool more quickly.  Why?  Well, there are all kinds of heat transfer equations related to conduction and convection that we could review, but that’s not really necessary.  It’s just common sense.  A hot potato stays a hot potato because it’s a heavy, dense thing.  It’s got what you could call “thermal mass.”  It stores a lot of energy when hot.  Something less weighty like, say, a radish, has less mass and so even if it starts at the same temperature, there just isn’t as much energy to dissipate to bring it back down to temperatures that allow for handling.

Now, the high-pressure fuel turbopump is not a potato, but it does have significant thermal mass.  It's about the size of an automobile V-8 engine.  Put your hand on the hood of your pick-up truck an hour after you've parked and you'll get a sense of thermal mass.  And just as with the rotor, when we went to the Block 2 configuration, we were able to provide for a larger weight allocation for everything as part of the effort to make a safer component.  So, in addition to the rotor, the housing and all of the internal flow-path elements of the pump were a bit more meaty than previous designs.
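The potato-versus-radish intuition is just the stored-heat relation Q = m·c·ΔT.  With rough, assumed numbers (both vegetables are mostly water, so they share about the same specific heat, and the masses are guesses):

```python
# Back-of-the-envelope "thermal mass" comparison.  Masses and the
# shared specific heat are rough assumptions, not measured values.

def stored_heat_J(mass_kg, c_J_per_kgK, dT_K):
    """Heat released in cooling through dT: Q = m * c * dT."""
    return mass_kg * c_J_per_kgK * dT_K

dT = 175.0 - 20.0                          # oven temp down to hand-safe
potato = stored_heat_J(0.30, 3900.0, dT)   # ~300 g potato
radish = stored_heat_J(0.02, 3900.0, dT)   # ~20 g radish
ratio = potato / radish   # 15x more energy for the ice water to absorb
```

Since the specific heat and temperature drop are the same for both, the ratio is just the mass ratio, which is exactly why the meatier Block 2 pump had more energy to give back to the fluid.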

But how does this relate to the STS-104 anomaly?  That’s really a good question because it’s not really obvious that it should.  If the pump was full of liquid hydrogen for all that time prior to launch and during ascent, then the pump ought to be the same temperature as the liquid hydrogen.  Just like water flows downhill, heat only flows when there’s a difference in temperature.  If the metal is liquid hydrogen temperature and the liquid hydrogen is liquid hydrogen temperature, then you ought to have a thermal standoff with no heat transfer.  Ahhh, but here is where you have to think in terms of other worlds with respect to the notion of “liquid hydrogen temperature.”

Think about water boiling on the stove.  How hot is it?  It’s just about 212 degrees.  If you turn down the burner so that it’s boiling less, it will still be at 212 degrees.  If you turn up the burner so that it’s boiling violently, it will still be at 212 degrees.  At regular, atmospheric pressure you cannot make water hotter than 212 degrees.  If you add any heat to water sitting at 212 degrees, that extra heat will be released by boiling but the temperature remains unchanged.  However, if rather than our normal 14.7 pounds per square inch we increase our pressure up to 1,000 pounds per square inch, then we can heat the water up to over 500 degrees before boiling starts.  In other words, the boiling point is dependent upon the pressure.  I mentioned this in a previous article in this series.
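That pressure dependence can even be estimated from first principles.  A rough Clausius-Clapeyron calculation, assuming a constant heat of vaporization (only an approximation this far above atmospheric pressure), lands right around the 500-degree figure:

```python
import math

# Rough Clausius-Clapeyron estimate of water's boiling point at
# elevated pressure.  Assumes a constant heat of vaporization of
# ~40.7 kJ/mol, which is only approximate at high pressure.

def boiling_point_K(p_psi, p0_psi=14.7, t0_K=373.15,
                    L_J_per_mol=40700.0, R=8.314):
    """Saturation temperature at p_psi, referenced to the normal
    boiling point (t0_K at p0_psi)."""
    return 1.0 / (1.0 / t0_K - (R / L_J_per_mol) * math.log(p_psi / p0_psi))

t_K = boiling_point_K(1000.0)
t_F = (t_K - 273.15) * 9.0 / 5.0 + 32.0   # roughly 530 F at 1,000 psi
```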

Typical phase diagram for a substance such as, for example, hydrogen

Prior to launch, we chill down the pump to liquid hydrogen temperatures at near atmospheric pressure conditions.  That means about 37 to 39 degrees above absolute zero (or more than 420 degrees below zero Fahrenheit).  While the engine is running, however, because we're adding lots and lots of work into the fluid as part of the pumping up to over 5,000 pounds per square inch of pressure, the temperature of the fluid coming out of the pump is somewhere between 90 and 100 degrees above absolute zero.  Well, if the fluid is up around 90 to 100 degrees, then the metal of the pump is going to be up around that temperature as well after eight and a half minutes of powered ascent.  So what happens when we shut down?  The spinning pump rotor slows, comes to a stop, and the pressures through the whole system drop.

Think about that water sitting at 500 degrees Fahrenheit and at 1,000 pounds per square inch pressure.  Now imagine that you gradually decrease the pressure back down to normal atmospheric pressure.  What happens?  The water is going to boil.  It is going to release energy as it cools down to the normal boiling point of 212 degrees at 14.7 pounds per square inch.  This same phenomenon happens in the hydrogen side of the engine during shutdown.  The pressure drops during shutdown and this warmer liquid needs to expel energy to get down to lower boiling point temperatures corresponding to lower pressures.  And, as the liquid temperature drops, then there is a temperature difference between liquid and the metal of the pump.  And what happens when you have a temperature difference?  That’s right.  You get heat transfer.

All of this is the normal process.  It’s all expected.  So, what made the STS-104 experience different with the Block 2 engine?  Answer: The hot potato, i.e., thermal mass.  As mentioned, the Block 2 high pressure turbopump was intentionally meatier than its predecessor.  This additional meatiness helped with the reliability and safety of the unit but it also added more thermal mass and this meant more energy being put back into the fluid.  More energy into the fluid translates to higher pressures in the trapped hydrogen between the main fuel valve and the prevalve.

“Eureka!” you say. “We have identified the source of the STS-104 pressure rise.”  Okay, yes, in combination we have.  Simply put, it was the combination of greater kinetic energy from the heavier rotor plus the additional heat due to the greater thermal mass of the pump structure that led to the elevated pressure between the main fuel valve and the prevalve.  And this elevated pressure, in turn, due to the release mechanism built into the prevalve, is what caused the elevated pressure in the 17-inch manifold.  We modeled all this analytically, both at a higher level for the purposes of running through many “what-if” scenarios and also at a very detailed, multi-node level embedded within the accepted transient model for the engines.

Comparison of flight data to model output for STS-104. Model shown here was the first-order model used to examine multiple "what-if" scenarios. The more detailed modeling matched the flight data characteristics to an even greater degree.

“Why didn’t you see this coming?”
So, in gracious and humble gratitude for solving the riddle of the STS-104 in-flight anomaly, the management community for the Space Shuttle demanded an answer to the following question: “Why didn’t you see this coming?”  And, sometimes, this question was asked with slightly more colorful language.  After all, the Block 2 engine configuration had been in development for over a decade, so it really was a perfectly fair question.  And, with perfect hindsight, perhaps it was something that we could have or should have predicted.  Fine, I’ll accept that.  But here is the biggest part of the reason why we didn’t really think of it: We never saw any difference on the test stand.

When you set up a test stand to test an engine, there are many competing factors to take into consideration.  Obviously, you want to protect the very expensive engine hardware so you include a multitude of safety provisions.  You also want to get useful data from the tests (remember: you only test engines for two reasons – to impress your friends and to get data).  But how do you define useful data?  Most people would say that useful data is defined as data from testing that most looks like and feels like the actual mission.  But you can’t make the test stand look exactly or entirely like the vehicle and, even if you did, it’s not like you can make the test stand fly through the upper atmosphere at Mach 10 during the test.  In other words, the ground test stand will never be a perfect reproduction of the flight mission.  So you make compromises based upon practicality, cost, and safety.

Comparison of flight and test data to model output for Block 2 Engine 2051. Test A2-790 was the acceptance test for this same engine that flew on STS-104. As you can see, on the test stand, there is practically no rise in inlet pressure after shutdown. FYI, tests for non-Block 2 engines look basically the same, i.e., little or no rise.

On the SSME test stand, the prevalves are higher up in the feed line as compared to the Shuttle orbiter.  This means that the trapped volume between the main fuel valve and the prevalve is much larger.  Also, in the shutdown sequence, the test stand prevalve closes later than on the vehicle.  And, very shortly after the prevalve closes, we bleed the line since the test stand prevalve is not a fancy flight unit with built-in relief functionality.  All these things combine to provide a very safe, steady shutdown for the engines in the test stand.  Pressure is maintained, but not allowed to rise very much, and we have a larger volume of liquid to keep the pump loaded and to absorb any energy input.  It wasn’t just for the Block 2 configuration that the test stand was different from the vehicle.  It had been different for nearly 30 years, and for all of that time the shutdown data between tests and flight looked different, but the differences were understood, accepted, and, because there had never been an issue, largely ignored.  The Block 2 tests looked just like all the previous configurations on the test stand with regards to the shutdown pressures and so a flag was never raised.  It was only when we got to the different geometry and procedures of flight that an issue appeared.

Okay, now what?
We now understood the issue and we understood why we hadn’t predicted that the issue would come up.  We now had to resolve the issue.  We had a number of possible choices, but the most obvious came from the fact that we hadn’t predicted the issue based upon test results.  So we figured that if we made the vehicle act a little more like the test stand, then we could make the anomaly go away.  We couldn’t move the prevalves or make the feed lines any longer, so we instead recommended delaying the prevalve closure by 2 seconds.  In terms of physics and thermodynamics, what this does is allow more of the energy being released from the spin down of the turbopump to leak upwards, out of the engine, and even back into the External Tank before we lock up the system.  Less trapped energy, less pressure rise, but we still spun down the turbopump safely as shown repeatedly on the test stand.  According to our analytical models, this worked well for the engine and the feed lines.

However, the prevalve closure is just the first step in a series of actions taken by the Shuttle in preparation to separate from the External Tank.  So, if we recommend that the prevalve close 2 seconds later, then the whole sequence has to change by 2 seconds including the disconnection of the External Tank.  That required an assessment from our trajectory analysis friends at the NASA Johnson Space Center.  They had to see if mission performance was impacted or if the External Tank reentry footprint (i.e., the place in the Indian or Pacific Oceans where the External Tank comes crashing down into the sea) was altered in an unacceptable manner.


The trajectory analysis results showed that the changes were acceptable and so, on 5 December 2001, less than five months after STS-104, we launched STS-108, Shuttle Endeavour, with one Block 2 configuration engine.  The results were exactly as predicted.  There were no unacceptable pressure rises.  In April 2002, on STS-110, we successfully launched a complete cluster of three Block 2 configuration engines, on Shuttle Atlantis, and had no post-shutdown pressure issues.  The STS-104 in-flight anomaly was officially considered to be resolved.

Plot showing the data from STS-104 and three subsequent flights in which the 2-second prevalve closure delay was incorporated into the sequence. Notice how the 2-second delay lowered the pressure for all engines and made all of the pressure profiles look similar.

No, the story of the STS-104 in-flight anomaly will never be made into a movie starring Tom Hanks a la “Apollo 13,” and certainly the stakes that we faced were never that dramatic in finding a resolution (thank goodness).  Nevertheless, it was an excellent example of an investigation in which we used the available data, constructed suitable analytical models, and found a solution to an engineering problem in a manner such that the ongoing sequence of Shuttle launches was never really impacted.  It was an interesting problem and a complete success and I am still, to this day, proud that I had the opportunity to play a role.  [Note, however, that if there are any movie producers reading this and they want to start casting the parts, Brad Pitt would be an excellent choice to play me.]


LEO Extra: “Huh, what’s that?” / STS-104 Part 2

At the beginning of the previous article, we discussed water, steam, and ice.  Here, for this article, I want you to think about a boiling pot of water on the stove.  Now, put a lid on the pot.  What happens?  Usually, the lid rattles around, rapping out an irregular rhythm (in my house, that usually gets the dogs barking), and it lets puffs of steam escape with each little wobble.  But what if we made the lid really tight?  Well, in that case, if we kept applying heat, the pressure in the pot could eventually rise high enough that the lid could pop off with a good bit of energy.  If I wanted to exaggerate, then I would say that the pot and lid combo could explode, but that’s probably being just a bit overly dramatic when talking about a normal lid.


Now, rather than water, think about liquid hydrogen.  Yes, I know, LH2 is not something that you encounter in your daily life.  Shoot, I’m in the business of rocket engines and I don’t deal with LH2 much other than on paper (or as mathematically represented in computer models).  But just think of it like any other liquid such that, if it is warmed and starts to boil in a closed container, then the pressure rises.  Something to keep in mind.

Now, let’s get back to STS-104.

I returned from my short vacation in Boston back in 2001 and got down to work with the team analyzing the STS-104 data.  After every flight, we get data from measurements on the engine and the Shuttle orbiter and we effectively reconstruct the engine performance to make sure that everything did what was expected.  Please note that the engine folks are not unique in this respect.  The team responsible for the boosters scours the post-flight data from the boosters (and, on the Shuttle Program, they also examined the recovered booster shells).  The team responsible for communications examines all of the communications data.  The team responsible for the External Tank studies, in detail, the tank data and reconstructs the draining of the tanks as another means for confirming that the engines performed as predicted.  For us engine guys, the initial conclusions were, yes, the engines performed as expected and as predicted.  In particular, the new Block 2 engine performed perfectly.  No obvious anomalies on the engine.  Pop the corks because all was good!

However, during the data review there was one moment of, “Huh, what’s that?”  It was over the data shown in the following plot (this is not the actual plot – I reconstructed it for this article using the original data):


What you have in that plot is data from two Shuttle missions.  STS-098 was a launch of Shuttle Atlantis that took place in February 2001.  STS-104, our launch being reviewed, was the next launch of Atlantis.  So, this is a comparison between two back-to-back missions of the same orbiter and that’s important since the measurement in question is on the orbiter side of the interface rather than on the engine side.  Indeed, it’s not a measurement that usually concerns us much.  This is the pressure in the shared manifold from which the three engines are fed liquid hydrogen.  The time period is after all the engines have shut down.  The thing that was kind of odd was the pressure rise at right around 11 seconds (please note that the data looks “boxy” for a variety of reasons including, primarily, data rate and, secondarily, the fact that we sometimes sacrifice fidelity for robustness depending on relative importance).

But, hey, the pressure rise was only about 10 psi.  We’re rocket guys.  We deal with thousands of psi all over the place.  This is a blip.  A nothing.  No big deal.  That was, I have to admit, the initial response.  But, of course, we still have to figure out how or why it happened.  The teams looking at post-flight data are always extremely meticulous since it is often this data that tells you that you’re safe to fly for the next mission.  However, there is still a prioritization of issues in your head.  Our biggest concern for this mission was how the engines, particularly the new Block 2 configuration, performed during powered ascent.  Nevertheless, we did come back and respond to the simple question asked, “Huh, what’s that?”

Simplified schematic of the liquid hydrogen fuel feed system.

Above is a simplified, cartoon schematic of the liquid hydrogen feed system from the External Tank to the engines.  The “Disconnect” is, exactly as it sounds, the place at which the External Tank disconnects from the Orbiter.  Remember, the sequence at the end of the powered ascent of the Space Shuttle is that the engines shutdown and then, a few seconds later, the External Tank is dropped off from the Orbiter.  For our purposes here, you can think of the Disconnect as one of three “on/off” kinds of valves in the system with the other two being the prevalves and the main fuel valves on the engines.

From the schematic you can see that the “17-Inch Manifold” (cleverly called that because, well, it’s a pipe that’s seventeen inches in diameter) is in the Orbiter.  As I said, it’s not even part of the engine.  So, why would we get an unusual pressure rise in that manifold after engine shutdown and does it have anything to do with the engines?

The first thing that we did was go back to other launches, both on Atlantis and on the other Orbiters since they were all configured the same way (though each had slight quirks and differences).  Frankly, we wanted to show, “Well, we’ve seen this before.  It’s no big deal.”  That way we could get back to patting ourselves on the back about our great success.  But what we found was that this pressure rise was not normal.  If we went far enough back in the flight history, we found examples of the manifold pressure rising, but these early launches used a different sequence of shutting down the engines and dropping the tank.  So, more and more, this rise in pressure was looking to be unique and unexplained.  Please realize that this is taking place over a couple of days.  It started slowly, as a “get back to us” action in the data review.  But it really gained momentum with the next series of revelations.

It turns out that the manifold has a relief valve that “cracks” open at anywhere from 40 to 55 psi.  Remember the pot of boiling water and the tight lid?  Well, everywhere in a cryogenic system that you can trap fluid – like, for example, between valves – there has to be a relief valve.  Otherwise, because cryogenic liquid has such a propensity to boil, you have the possibility of, well, “popping the lid.”

Steam locomotive blowing out steam through a relief valve.  Credit: Scott Schroeder

So, on STS-104, we had a pressure rise in the 17-Inch LH2 Manifold that could have opened the relief valve.  It turns out that it didn’t (based on other data), but it could have.  It also turns out that the relief valve is actually single-string, meaning that there is no redundant, backup pathway should the relief valve fail to open.  In other words, consistent with the broad redundancy philosophy pertaining to human space flight, we should not rely on the relief valve to guard against over-pressurizing the manifold.  That relief valve ought to be only a tool in the case of an emergency.  What we need to do is avoid unintended pressure rises in the manifold.  Thus, we needed to figure out why the pressure rise happened.  We could not pooh-pooh the situation.

Now, before anyone thinks that we nearly had a catastrophic event on STS-104, please let me assure you that this was not the case.  We just had a 10 to 12 psi pressure rise in the manifold.  The structural limit of that pipe allowed for another fifteen or twenty psi of pressure before any analytical limits would have been reached and there’s additional margin designed in over that. However, also in keeping with the philosophy and thought processes for human spaceflight, we can’t just look at the 10 psi increase and believe that this represents the worst case.  If you don’t know why it happened this time, then you certainly don’t know that it couldn’t be worse next time.

What does this all mean?  Well, it means that we started with “Huh, what’s that?” and, over the course of a few days, got to the point of formally declaring the situation an In-Flight Anomaly (IFA) and a possible impediment to future flights.  Let me tell you, there is not much that is more motivating than knowing that your subsystem could be what holds up the next launch simply because you can’t figure out what’s going on.  Oh my, things got busy.

Here was another piece of data that jumped out as odd.  Note that we saw this right away during the data review, but we didn’t think too much of it at the time because, by itself, it didn’t seem to endanger anything.  In the plot below, I’ve shown again the pressure in the 17-Inch Manifold after engine shutdown and along with that I’ve plotted the engine inlet pressures.  These measurements are below the prevalves in the schematic shown up above.


The thing that was obvious was that the inlet pressure related to the Block 2 engine was higher than the two inlet pressures for the non-Block 2 engines.  It wasn’t until the whole “oh-my-goodness” momentum of realizing that the manifold pressure rise was important that we went back, plotted up every flight ever flown, and discovered that not only was this inlet pressure high, it was the highest inlet pressure ever seen.  Now things were getting interesting.

Let’s pause for a moment and talk about the process shown in the plot above.  Remember, the time marked as zero on this plot is the point at which the engines are commanded to shut down.  The main fuel valves closing past five seconds marks the end of liquid hydrogen flow through the engines.  Next, the prevalves on the Orbiter are closed.  That traps liquid hydrogen between the main fuel valves and the prevalves.  Trapped fluid with any energy input is like the boiling pot of water and so the pressure rises in that volume as you can see on the plot.  A pressure rise to some degree is expected and normal.  Next, the disconnect valve is closed between the Orbiter and the External Tank.  Here again, you trap a volume of fluid.  But usually, except for STS-104, there isn’t much, if any, pressure rise in this volume.  Why?  In part, it’s because it’s a much larger volume with, therefore, more compliance.  But also because there’s less energy input (that’s a key point to put away and remember for the next article).  Once the disconnect valve is closed, that allows for the tank to be separated from the vehicle.  Then, after the tank is safely away, dump valves and bleed valves are opened to start draining the system.  Here you see the pressures drop because the fluids are allowed to leak out in a controlled manner.  Thus, the whole thing is a process of shutting down flow, systematically locking up two volumes of fluid, letting go of the tank, and then releasing the pressure and fluid in the trapped volumes as part of the initial preparation for having the Shuttle vehicle stay on orbit.


So, this was where we were in terms of what we knew and the data that we had a few days after the launch of STS-104.  Our task was to understand and reconstruct why the In-Flight Anomaly happened.  Let’s review what we know at this point just as if we were working through the STS-104 investigation together:

  • After engine shutdown, the pressure rose in the 17-Inch LH2 Manifold and we figured out that such an occurrence, while not terrible by itself, was indeed something that we should avoid
  • During roughly the same time period, the pressure in the lower branch of the LH2 feed system, the section below the prevalve feeding the new Block 2 engine, also rose and did so to a level higher than for any previous Shuttle flight
  • This was the first flight of the new Block 2 configuration which featured a new high-pressure fuel turbopump design
  • As discussed in the previous article, the Block 2 high-pressure fuel turbopump was beefier and more robust than the previous version

In addition to this, I will give you one more key piece of information (something that I learned as part of the original investigation): The prevalves are designed such that they have their own internal relief system.  If the pressure in the volume trapped between the prevalve and the main fuel valve of the engine reaches anywhere from 35 to 45 psi higher than the pressure in the 17-Inch LH2 Manifold, then the prevalve allows leakage “backwards” across the valve.  The differential pressure where this relief function kicked in was different for each prevalve, but it was an intentional design feature that influenced the STS-104 anomaly.
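That relief behavior is simple enough to sketch in a few lines of Python.  The 35 to 45 psi crack range is from the investigation as described above; the specific example numbers below are made up for illustration only.

```python
# Sketch of the prevalve relief behavior described above. The crack
# differential pressure varied valve to valve (35-45 psi per the article);
# the example pressures below are invented for illustration.

def prevalve_leaks_back(trapped_psia, manifold_psia, crack_dp_psi):
    """Return True if the volume trapped below the prevalve is far enough
    above the manifold pressure that the prevalve relieves 'backwards'
    into the 17-Inch Manifold."""
    return (trapped_psia - manifold_psia) > crack_dp_psi

# Trapped volume at 80 psia, manifold at 35 psia, crack dP of 40 psi:
# the 45 psi differential exceeds the crack setting, so it relieves.
print(prevalve_leaks_back(80.0, 35.0, 40.0))  # True
print(prevalve_leaks_back(60.0, 35.0, 40.0))  # False -- only 25 psi differential
```

Note how this feature couples the two volumes: pressure building below the prevalve can end up showing itself as a pressure rise in the manifold above it.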

So, that’s it for now.  In the first article, I gave you some background.  In this article I told you about the observed data and how it came to be understood that this was an issue rather than just an oddity.  In the next and final article on this subject, I’ll tell you about what we determined was the actual scenario, the models that we developed to reconstruct the anomaly, and how we fixed the problem such that we could continue to fly the Shuttle for almost another decade.

I leave you with two questions to contemplate until you hear from me again:


LEO Extra: A Little History / STS-104, Part 1

To start this post, I want you to think a little bit about water.  As everyone knows, water is a liquid.  In fact, if you think about the word “liquid” just for a couple of moments, you probably had an image of water in your head.  Water is liquid; liquid is water.

Okay, but what about ice?  “Yes,” you say, “but ice is ice and water is water.  That’s why we have two different words.”  That’s right.  Our language has been made to fit our experience, but we all know, of course, that ice is just frozen water.  And, of course, we know that steam is water made hot enough to boil and become gaseous.  We have three different words for three different states of the same chemical stuff: H2O, two atoms of hydrogen bound to a single atom of oxygen.  On our planet, at typical, habitable temperatures and given our atmospheric pressure at the surface where we live, water is liquid and the other states – gaseous and solid – are generated from there as deviations from “normal conditions.”  And that’s good since we otherwise wouldn’t exist as a species and, more importantly, nobody at all would be reading this blog.


But let’s suppose that we lived on a planet where the typical ambient temperature was, say, 300 degrees Fahrenheit, but everything else was mostly the same.  Please ignore, for a moment at least, all of the other issues arising from such a scenario and imagine what our sense of “water” would be.  All of the H2O that we would know of in our everyday lives would be gaseous.  The only way that we could get liquid water would be to chill some of the gaseous stuff down far below “normal” ambient temperatures, down below its boiling point.  And making solid water would require chilling even further, roughly to 270 degrees below “normal.”  If this was our world, then we probably wouldn’t have three separate words like “ice, water, steam.”  We would likely instead talk in terms of “solid H2O, liquid H2O, and gaseous H2O.”


Now, let’s suppose that rather than a hot planet, we lived on a planet with much higher atmospheric pressure (again, with everything else pretty much like it is on Earth).  In that case, we’d still have a general sense of water, steam, and ice, but our transition from liquid to gaseous would occur at a higher temperature.  If our atmospheric pressure was 1,000 pounds per square inch (as opposed to our pleasant 14.7 psi on the surface of Earth), then our water wouldn’t boil until it reached well over 500 degrees Fahrenheit.  That’s 300 degrees higher than what we’re used to on Earth.  Kitchen stoves on this hypothetical, high-pressure planet would be using some serious energy just to make a bit of pasta for dinner.

So, what does all this supposing have to do with rocket engines?  Well, it has to do with thinking about the cryogenic fluids that we use for propellants.  When dealing with cryogenics, you have to think in terms of these topsy-turvy situations where things “boil” at four hundred degrees below zero and, in a rocket engine where we produce very high pressure situations, that boiling point in terms of temperature can be entirely situational or, above a certain pressure (the critical pressure), go away completely.  And, specifically, this is the background that you need to understand the curious case of STS-104.
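Here are the topsy-turvy hydrogen numbers in a small sketch.  The property values are rounded figures from standard references (not anything specific to the SSME), but they show the point: below the critical pressure there is a boiling point that depends on pressure, and above it the liquid/gas distinction simply disappears.

```python
# Rounded hydrogen property values from standard references, to
# illustrate the 'boiling point can go away' point in the text.

H2_NORMAL_BOILING_POINT_F = -423.2  # ~20.3 K at 14.7 psia
H2_CRITICAL_TEMP_F = -400.3         # ~33.1 K
H2_CRITICAL_PRESSURE_PSIA = 188.0   # ~1.3 MPa

def hydrogen_can_boil(pressure_psia):
    """Below the critical pressure there's a liquid/gas phase boundary
    (i.e., a pressure-dependent boiling point); above it, there isn't --
    the fluid is supercritical and never 'boils' at all."""
    return pressure_psia < H2_CRITICAL_PRESSURE_PSIA

print(hydrogen_can_boil(14.7))    # True -- boils at about -423 deg F
print(hydrogen_can_boil(3000.0))  # False -- supercritical, no boiling
```

Inside the high-pressure sections of the engine, the hydrogen is well above that critical pressure; in the feed lines after shutdown, it is well below it, which is exactly where the boiling-in-a-closed-container behavior matters.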


STS-104 was a Space Shuttle mission that launched in July 2001.  It was a mission to the International Space Station and the orbiter was Atlantis.  Although I’d worked on the preparations for that launch and had participated in the flight readiness review at the engine project level, I was actually in Boston when it launched.  My wife had a veterinary conference and I tagged along because, well, Boston is just a really cool city to visit.  I remember sitting in the Boston Common, on a park bench, reading the newspaper article about the launch with some satisfaction knowing that I’d been involved in the process (along with, of course, hundreds and hundreds of other folks who, like me, justifiably took pride with each and every launch).


STS-104 was a special launch from a rocket engine perspective.  It was the first launch that included one example of a Block 2 Space Shuttle Main Engine.  This hardware configuration for the SSME was more than a dozen years in coming to fruition.  When I’d first started work supporting the Shuttle Program back in 1990, we were flying the Phase II version of the engine.  Gradually, over the subsequent decade-plus, NASA and Rocketdyne and Pratt & Whitney worked together to make that great engine even better, more reliable, safer.  The culmination, through several separate designations, was finally the Block 2 configuration.  Over the years, I had worked as an analyst during many of the stages of its development and, towards the very end, had been a genuine Datadog for the final certification testing of the complete package.  For that final process, we were turning around data reviews day in and day out as we stepped through two 22-test series at a rate as great as five tests every two weeks.  It was hectic but exciting.

The final piece that made the Block 2 configuration complete was the addition of the new high pressure fuel turbopump (HPFTP).  Here are a few fun facts about this remarkable machine:


Both the older turbopump and this new, Block 2 version had roughly the same performance from an engine power balance perspective, but the new one was safer.  That was the whole reason for the development effort.  Why was it safer?  Because over the twenty years preceding that point, we’d learned all kinds of stuff in terms of potential failure modes, effects, and mitigation methods.  Indeed, even the older, pre-Block 2 turbopump was not exactly the same one that had first propelled STS-1 in 1981.  We’d made small modifications all along to ensure that flight was as safe as possible, but the Block 2 design was a complete overhaul, an entirely new component.  Also, some of this additional reliability and safety came from the simple fact that the new HPFTP was stouter than the original.  Due to other modifications across the rest of the Space Shuttle vehicle, we were able to make the engine a bit heavier while still meeting mission objectives.  General Rule:  Give a designer a little more mass margin and he’ll give you larger factors of safety.  This additional mass for the HPFTP will be part of the overall story, so hold on to that fact.


Now, back to the story of STS-104.  When we last left our hero, he was sitting on a bench in the Boston Common, smoking a cigar and reading the newspaper.  He was all smug and pleased with himself in knowing that he had something to do with a positive story in that newspaper and in every other major newspaper across the country and some newspapers around the world.  I don’t think that it’s bragging to admit that it’s a pretty good feeling when something like that happens to you.

It wasn’t until I got back to work a couple of days later that I learned of the anomaly.  At first, it wasn’t even recognized as an anomaly.  It was more of an oddity in the data.  But all oddities have to be explained and as we dug into it we came to realize that it was indeed a real event, not some kind of data glitch, and that it was something we’d not predicted.  And then we figured out that it was something that needed to be remedied or future flights of the Block 2 engine would be in jeopardy as would the entire Shuttle Program.  Coming to resolution of this issue would consume much of the next few months of my professional life, but we did find a solution and we continued flying.  I will tell you more about it in Part 2 of this story in my next posting.

Post script:  Go out to Wikipedia and look up STS-104.  You’ll see there a little note provided about the Block 2 engine and an in-flight anomaly.  It doesn’t say much.  I’ll share with you “the rest of the story” (as the late Paul Harvey would say).

Inside the LEO Doghouse: RS-25 vs. J-2X

Nobody is confused by the fact that we don’t use a Ferrari 458 Spider sports car as a dump truck. Nobody is astonished that a Toyota Prius did not qualify for the Indianapolis 500 race this past May. And nobody whom I know drives a Caterpillar earth-moving truck back and forth from home to work (…but, I have to admit, it might be really cool to try – Outta my way, I’m coming through!).


We’re not confused by these things because most of us have automobiles and we are generally familiar with the notion of different vehicles being designed, built, and used for different purposes. In a number of different articles I’ve repeatedly stressed the notion that form follows function and function follows mission requirements. The mission requirements for a Ferrari are different than those of a Prius or those of a giant piece of mining equipment and so the resulting products are dramatically different.

The same concept of differentiation applies to rocket engines. That’s obvious, right?

On one end of the spectrum, you have something like the F-1 engine used for the Saturn V launch vehicle. It had a thrust level of 1.5 million pounds-force and a specific impulse of about 260 seconds (sea level). It stood nineteen feet high, was over twelve feet in diameter at the base of the nozzle, and it weighed over nine tons. On the other end of the spectrum (at least the spectrum that we deal with within LEO), you have the RL10 which, depending on the specific configuration, puts out less than 25 thousand pounds-force of thrust but has a specific impulse over 450 seconds (vacuum). If you have an RL10 without the big nozzle extension, the engine is just over seven feet tall, about four feet in diameter, and it weighs less than 400 pounds.



Yes, that’s a flood of numbers, but let me make it a bit more graphic. If we wanted to get the same thrust level using RL10 engines as was obtained on the S-IC stage of the Saturn V (which used a cluster of five F-1 engines), then you would need 336 RL10 engines. That would be an interesting vehicle configuration indeed. Alternatively, try to imagine the Centaur upper stage – the typical use for the RL10 – with something as big as an F-1 hanging off the end. The whole stage weighs less than five thousand pounds (dry) and is just over forty feet long. If you tried to apply 1.5 million pounds-force of thrust to something like that, then in just fractions of a second, the whole stage would be a shiny metal grease spot in space.
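The arithmetic behind that 336-engine figure is simple enough to check.  The per-engine RL10 thrust below (about 22,300 pounds-force) is my assumed figure consistent with "less than 25 thousand pounds-force"; actual thrust varies with the RL10 configuration.

```python
# Back-of-the-envelope check of the '336 RL10 engines' figure: matching
# total S-IC first-stage thrust with RL10s. The RL10 thrust value here
# is an assumption (it varies by configuration), not an official spec.

F1_THRUST_LBF = 1.5e6      # per F-1 engine, sea level
F1_COUNT = 5               # F-1 engines on the S-IC stage
RL10_THRUST_LBF = 22_300.0 # assumed per-engine RL10 thrust

total_thrust = F1_COUNT * F1_THRUST_LBF          # 7.5 million lbf
engines_needed = round(total_thrust / RL10_THRUST_LBF)
print(engines_needed)  # 336, matching the figure in the text
```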


So that brings me to the subject of this article. I want to compare the RS-25 and J-2X engines, currently the two primary products of the Liquid Engines Office. These two engines are not as radically different from each other as are the F-1 and the RL10, but the differences are substantial and meaningful. Here is a quickie table that will give you many of the basic characteristics of the two engines:


I know, I know. That all looks like a meaningless, banal listing of numbers. Specifications rarely seem interesting unless or until you know the stories behind the facts. So, let’s discuss the stories.

First of all, they’re both hydrogen engines. Why? Because they both need to have high specific impulse performance at high altitude and in space. The difference between these two engines is that the RS-25 is a sustainer engine whereas the J-2X is an upper stage engine. The RS-25 sustainer mission is to start on the ground and continue firing on through the entire vehicle ascent to orbit. The J-2X upper stage engine mission is to start at altitude, after vehicle staging, and propel the remaining part of the vehicle into orbit. Also, an upper stage engine can sometimes be used for a second firing in space to perform an orbital maneuver. This difference in missions accounts for the difference in raw power. The RS-25 is part of the propulsion system lifting a vehicle off the ground. It needs to be pretty powerful. The J-2X is the propulsion system for a vehicle already aloft and flying quickly across the sky.


The difference in missions is also the largest part of the explanation for the different engine cycles used. In past articles, I’ve discussed the schematic differences between a gas-generator engine like the J-2X and a staged-combustion engine like the RS-25. The staged-combustion engine is more complex, but it generates very high performance. You may look at the two minimum specific impulse values, 450.8 versus 448 seconds, and say that these are not very different, but remember that the J-2X cannot be started on the ground. If we tried to start the J-2X on the ground, the separation loads in the nozzle extension would rip it apart. The RS-25 achieves this very high performance without a nozzle extension because, well, it had to in order to fulfill its mission. Note that a ground-start version of the J-2X would have a minimum specific impulse of something like 436 seconds.
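To see why even a few seconds of specific impulse is worth fighting for, you can plug the numbers into the ideal rocket equation, delta-v = Isp × g0 × ln(m0/mf). A minimal sketch follows; the stage mass ratio is an assumed illustrative value, not a figure for any actual vehicle:

```python
import math

g0 = 9.80665        # standard gravity, m/s^2
mass_ratio = 5.0    # assumed m0/mf, purely for illustration

for isp_s in (436.0, 448.0, 450.8):
    delta_v = isp_s * g0 * math.log(mass_ratio)
    print(f"Isp {isp_s:5.1f} s -> delta-v {delta_v:6.0f} m/s")
```

Even with this made-up mass ratio, the gap between 436 and 450.8 seconds works out to well over a hundred meters per second of delta-v, which is real payload.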

Something else that is significantly different between the two engines is their throttling capabilities. The J-2X can perform a single step down in thrust level. This capability can be used to minimize vehicle loads or as part of a propellant utilization system since the throttle is accomplished via a mixture ratio shift. The RS-25, on the other hand, has a very broad throttle range. Why? Two reasons.

First, because during the first stage portion of any launch vehicle ascent, the vehicle experiences what’s known as a “max Q” condition. If you’ve ever listened to a Shuttle launch, you may have heard the announcer talk about “max Q” or “maximum dynamic pressure.” This is the point at which the force of the air on the structure of the vehicle is greatest. It is a combination of high speed and relatively dense air. Later, the vehicle will be flying faster, but at higher altitudes, the air is thinner. Thinner air means less dynamic pressure (the equation – thank you Mr. Bernoulli – says that dynamic pressure is proportional to the air density and to the square of the vehicle velocity). Thus, to minimize structural loads on the vehicle, the engines are throttled down deeply for a short period of time and then brought back to full power. An upper stage engine operating only at high altitudes never has to face a max-Q condition.

Second, a sustainer has to be big enough to contribute to lifting the vehicle off the ground, but at higher altitudes, after the vehicle has been emptied of most of its propellants, too much thrust gives you too much acceleration. If you had no way to throttle back the engine thrust levels, then the vehicle would accelerate beyond the capacity of the astronauts to survive. An upper stage engine does not generally start out with as much oomph, so the throttling needed to lessen acceleration loading is not as great.
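The dynamic pressure relationship mentioned above, q = ½ × density × velocity², can be sketched directly. The densities and speeds below are illustrative round numbers, not actual ascent trajectory data; they just show how the thinner air at altitude can more than offset the higher speed:

```python
def dynamic_pressure(rho_kg_m3, v_m_s):
    # q = 1/2 * rho * v^2
    return 0.5 * rho_kg_m3 * v_m_s ** 2

# Illustrative round numbers, not real trajectory points:
q_low = dynamic_pressure(0.364, 470.0)    # ~11 km: denser air, transonic speed
q_high = dynamic_pressure(0.018, 1400.0)  # ~30 km: thin air, much faster
print(q_low > q_high)  # -> True: the slower, lower point sees the higher load
```

That is why the throttle-down happens relatively early in ascent and full power can be restored once the vehicle climbs into thinner air.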


Lastly, let’s talk about differences in engine control. Engine control typically refers to the parameters of thrust level and mixture ratio (i.e., the ratio of propellants, oxidizer to fuel, being consumed by the engine). When we talk about thrust, we are talking about throttling as discussed above, yes, but also thrust precision, i.e., the capability of the engine to hold tightly at a particular thrust level. When we talk about mixture ratio, we’re generally talking only about the notion of precision (but below in a post-script I’ll tell you a little more with regard to the RS-25). Well, what would cause an engine to stray away from a fixed operational condition? Two things: boundary conditions and internal conditions.

The most obvious boundary conditions are pressure and temperature of the propellants coming into the engine. A sustainer engine can see a wide variation in propellant inlet conditions due to variations in vehicle acceleration. This is most dramatic during staging activities. An upper stage engine won’t typically see these wide variations. This is why it was very, very useful (almost necessary) for the RS-25 to be a closed-loop engine. A closed-loop engine uses particular measurements for feedback to control valves that, in turn, control engine thrust level and mixture ratio to tight ranges. The RS-25 holds true to the set thrust level and mixture ratio regardless of propellant inlet conditions. The J-2X, on the other hand, is an open-loop engine. The thrust and mixture ratio for J-2X will stray a bit with variations in propellant inlet conditions. Note that this “straying” is predictable and is built into the overall mission design. Because the upper stage engine won’t see the same wide variations in propellant inlet conditions, this is a plausible design solution.
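Here's a toy sketch of the distinction. The "engine" model (thrust as simply valve opening times inlet pressure) and the feedback gain are invented for illustration only and bear no resemblance to real RS-25 or J-2X dynamics:

```python
def thrust(valve, inlet):
    # Toy model: pretend thrust is just valve opening times inlet pressure.
    return valve * inlet

def closed_loop_valve(inlet, target, gain=0.01, steps=100):
    """Proportional feedback: keep nudging the valve until thrust hits target."""
    valve = 1.0
    for _ in range(steps):
        valve += gain * (target - thrust(valve, inlet))
    return valve

target = 100.0
for inlet in (100.0, 95.0, 110.0):    # drifting inlet condition
    open_loop = thrust(1.0, inlet)    # fixed valve setting: thrust strays with inlet
    closed = thrust(closed_loop_valve(inlet, target), inlet)
    print(f"inlet={inlet:5.1f}  open-loop={open_loop:6.1f}  closed-loop={closed:6.2f}")
```

The open-loop thrust drifts one-for-one with the inlet condition, while the feedback loop keeps adjusting the valve until the target is restored — the same idea, in cartoon form, as the closed-loop control described above.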


The different control schemes for the two engines are also the reason why the thrust and mixture ratio precision values noted in the table above differ between the RS-25 and the J-2X. Every engine runs slightly differently from firing to firing. These are usually small variations, but they are there. This is part of the “internal conditions” factor in terms of an engine straying from a fixed operational condition. A closed-loop control engine can measure where it is with regard to thrust and mixture ratio and make corrections to accommodate and compensate for slightly different internal conditions. An open-loop engine like the J-2X cannot make these accommodations and so it will have a wider run-to-run variability even if everything else remains the same.

Note that we could have made J-2X a closed-loop engine. We made the specific decision to not go that way based upon a cost-benefit analysis. Simply put, closed-loop is more complex and, therefore, more expensive to develop and implement. We conducted a trade study, in conjunction with the stage development office, and decided that the benefits in overall stage performance did not justify the additional development and production cost. For RS-25, given its mission, it really had to be closed-loop from a technical perspective to enable the Space Shuttle mission. Plus – thank goodness for us today – the RS-25 control algorithms are validated and flight-proven as we head into the Space Launch System Program. That’s a nice feature of using a mature engine design.


So, that’s a top-level comparison of our two engines that we’re managing for the SLS Program. They have a number of common features, which is not surprising given that the SSME design grew out of the original J-2 experience forty-some years ago and the J-2X was developed, in part, with thirty-some years of SSME experience behind us. But they are also quite different machines because they were designed for different missions. No, this is not a case analogous to the comparison of a Ferrari and a dump truck. It’s more like, perhaps, a Bugatti Veyron and a Lamborghini Aventador. Each is just a remarkable creation in its own right (…and I’d probably be reasonably happy with either in my garage…).


Post-Script. A quick note about mixture ratio control and RS-25. You will note that the mixture ratio is shown as a range in the table of characteristics up above. The RS-25 can be set to run at any mixture ratio within that range. This is a nice accommodation for stage design efforts today as part of the SLS Program, but that’s not why the range exists. The original design requirements for the SSME included not only the provision for variable, controllable thrust level in run, but also for independently variable and controllable mixture ratio during engine firing. This fact, in turn, explains the rather unusual engine configuration of having two separate preburners, one for the fuel pump and one for the oxidizer pump. I’ll tell you why.


Think back to basic algebra. Remember when you had to solve for a number of variables using several equations? The mathematical rule of thumb was that you had to have as many independent equations as there were variables or else you could not arrive at unique solutions for each variable. The same principle is applicable here. With two separate, independently controlled preburners (and therefore independent sources of turbine power), you can independently control two output parameters, namely thrust and mixture ratio. That’s pretty cool. But here’s the interesting historical part: we never actually used the shifting mixture ratio in flight. As the vehicle matured, it was decided that the mixture ratio shifting capability was not needed. But the design and development of the engine was too far down the road to backtrack and simplify. Thus, we have a dual-preburner, staged-combustion RS-25 engine.
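The counting argument can be made concrete with a made-up linear model. The coefficients and targets below are pure fiction chosen for easy arithmetic; the point is only that two independent knobs (the two preburner power levels) are exactly enough to pin down two outputs (thrust and mixture ratio) at once:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Fictitious linear model, in scaled units:
#   thrust        = 2.0 * fuel_pb + 1.0 * ox_pb = 100.0
#   mixture ratio = 0.5 * fuel_pb + 1.5 * ox_pb =  30.0
fuel_pb, ox_pb = solve_2x2(2.0, 1.0, 0.5, 1.5, 100.0, 30.0)
print(fuel_pb, ox_pb)  # -> 48.0 4.0
```

With only one preburner you would have one equation in two unknowns, and you could hit the thrust target or the mixture ratio target, but not both independently.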

Post-Post-Script. A number of years ago as part of the SSME project, for some specialized development testing, we did actually invoke the capability to shift mixture ratio in run on the test stand. So we have demonstrated this unique capability on an engine hot fire. There just wasn’t ever any reason to use it as part of a mission. Interesting.