I enjoy cooking. Most people think that when I say that, it’s because I’m an engineer by training, that I like cooking for the structured notion of a recipe and exactly measuring things out and the chemical precision of mixing that with this, at this speed, under these conditions, with these implements, and then forming it all together with a specified heat input over a given time using appropriately sized and shaped pots and pans optimized for uniform heat transfer, blah, blah, blah, blah…
But, that’s way, way off from the truth.
Actually, I like to cook things that allow for, let’s say, “significant organic creativity.” I make a mean vegetarian chili, but you can be sure that it will be different every single time that I make it since it’s always from memory and my memory ain’t what it used to be. I wing it. And that’s fun. And even though the details vary slightly, it’s been good every time (so far). The worst side effect that I could attribute to any particular version might be a bit of heartburn (properly mellowing and blending habanero peppers is an imprecise art form I have not yet consistently mastered).
So, what does my free-form chili cooking have to do with J-2X? Believe it or not, I want to talk about one of the adjectives that we frequently apply to the J-2X engine: “human-rated.” What does that mean? We use that term (or the older, less politically correct term “man-rated”) all of the time and, for the most part, those of us within our little clique understand the general context of its meaning. But if you asked any of us to explain, you’d likely get a wide variety of different, complex, and mostly correct yet often partial answers. I am no genius but, despite all odds, I will do my best to provide a reasonably complete framework for a definition so as to help you better understand the J-2X engine.
And, it will come back to my cooking analogy. Really.
First, we need to recognize that there is really no such thing as a “human-rated rocket engine.” That is shorthand terminology that ought to be written out as: “a rocket engine that could be suitable as part of an overall, human-rated launch system.” Think of it this way: Let’s say that you had a total junker of a car but you installed one perfectly pristine, top-quality piston. Do you now have a good car or do you still have a junker? You’d still have a junker, of course. Or, let’s say that you had a really nice car but all of the spark plugs were corroded, eroded, and barely functional. Do you still have a nice car? Well, maybe the paint job is pretty and the stereo sound is clear, but it’s not going to get anywhere quickly, reliably, or efficiently with bad plugs. The point is that no single element of something as familiar as an automobile makes it complete and good and, in an analogous manner, no single element of something as large as a launch architecture is, in itself, human-rated. The whole system is rated for human spaceflight because the system as a whole, as well as its constituents such as the J-2X, meets certain standards and processes that we’ll discuss below. We call the J-2X “human-rated” as a shorthand way of saying that it could be part of a human-rated architecture consisting of the rest of the vehicle, ground operations, mission control, and exceptionally well trained ground and flight crews, etc.
Second, let’s think about the adjective “human-rated” itself and its definition. What does that mean? It means simply this: the estimated risk is acceptably low so that we can responsibly decide to put human beings into the vehicle for launch. Again, we can relate this to automobiles. When you drove to work today, you took a risk. Unfortunately, auto accidents happen on the roads and highways and, more unfortunately, despite all of the protective apparatus built into our cars, people do sometimes get hurt in these accidents, or worse. But you accepted that risk and drove to work anyway. You judged your auto to be sufficiently safe. You judged that the roads were well paved and properly marked, that the police were properly monitoring bad and endangering behavior on the roads, and that the weather was clear enough to allow for safe operation of your vehicle. Thus, your “drive-to-work system” was, today, according to your judgment, “human-rated” for you. You weighed the risks — consciously or subconsciously — and decided to accept these risks and make the trip.
Spaceflight is ten thousand times more complex than driving to work, but the rationale is entirely analogous. The “fly-to-space system” (note again it’s a “system” not just a vehicle) is “human-rated” when we judge the risk to be acceptable in light of the potential rewards. The important and fundamental point is that, in the end, it is a judgment. Sometimes, for example, we accept more risk because we judge that the potential rewards are that much more significant. Think back to the early days of human spaceflight. I can guarantee that there is no way in heck that we would today put an astronaut into some of those early vehicles. We would not today consider those early systems to be human-rated by our current standards. But at that time, we as a nation accepted the risk and, by the way, achieved extraordinary milestones. Today, our objectives and potential rewards are different and so our judgments with regards to risk are accordingly different.
So, if it’s all just a matter of judgment, then doesn’t that mean that there really is no such thing as “human-rated”? No, I would strongly disagree.
Here is where I get back to my cooking analogy. While my chili may have slightly different constituents each time that it’s made, and while it might taste a bit different each time, there is no question as to whether it is chili. I use my expert cooking judgment to combine the essential ingredients into a recognizable and tasty product (with or without subsequent heartburn). When we talk about an engine being “human-rated,” we too are not basing that judgment upon a fixed recipe. We are basing it upon a combination of essential ingredients and expert judgment.
If you’re wondering whether NASA maintains some kind of formal recipe for human rating, I refer you to NASA Procedural Requirements (NPR) 8705.2, revision B (effective May 2008), “Human-Rating Requirements for Space Systems.” While this document is helpful, in a general sense, with regards to what technical and programmatic areas to consider, it is written at a very high level, i.e., at the “fly-to-space system” level. As such, it does not offer a great deal of rocket-engine-specific information. This, in my opinion, is exactly as it should be. The actual making of the chili should be left to the expert cooks. Even NPR 8705.2 makes it quite clear that the intent of the document is only to establish a framework within which “human rating” takes place. It is not intended to be a step-by-step recipe book for the many, many diverse parts of a human spaceflight system.
What then are the essential ingredients for a human-rated engine? Not surprisingly, the answer can be thought of as somewhat following the life cycle of an engine development project.
Design and Development
Specific technical requirements — There is a small handful of specific technical requirements that effectively flow down from NPR 8705.2B and impact the engine design. One is the requirement that, where appropriate and where it can be shown to increase reliability and safety, we should use redundant systems. On the J-2X, the clearest manifestation of this is the use of an engine controller with two channels. Should one channel fail (as even heavy-duty computer systems sometimes can), the other channel can take over and continue safe operation. Another specific requirement at the system level is that there exist abort systems that allow the crew to escape from a bad situation on the vehicle. This requirement decomposes to a requirement on the J-2X for a redline health monitoring system that shuts down the engine in the event of an imminent failure and notifies the vehicle of this shutdown. This thereby allows the crew the opportunity to perform an abort.
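To make that redline-monitoring idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration — the parameter names and limit values are not J-2X data — but it shows the basic logic: watch a few critical parameters every cycle and, if one goes out of bounds, command a shutdown and notify the vehicle so the crew can abort.

```python
# Toy sketch of redline health monitoring. All parameter names and
# limit values here are hypothetical, invented for illustration only.

REDLINES = {
    "turbine_discharge_temp_K": (0.0, 1050.0),  # hypothetical limits
    "chamber_pressure_MPa": (4.0, 10.0),
}

def check_redlines(sensors):
    """Return the list of violated redline parameters for one data frame.

    A missing sensor reading is treated as a violation (fail-safe).
    """
    violations = []
    for name, (low, high) in REDLINES.items():
        value = sensors.get(name)
        if value is None or not (low <= value <= high):
            violations.append(name)
    return violations

def monitor_frame(sensors):
    """If any redline is violated, command shutdown and notify the vehicle."""
    violations = check_redlines(sensors)
    if violations:
        return {"action": "SHUTDOWN", "notify_vehicle": True,
                "violations": violations}
    return {"action": "CONTINUE", "notify_vehicle": False, "violations": []}
```

A real engine controller is, of course, vastly more sophisticated (and runs on two redundant channels, as noted above), but the shutdown-and-notify handshake with the vehicle is the heart of how the abort requirement flows down to the engine.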
Design, construction, workmanship standards — Not surprisingly, we don’t start from scratch every time that we sit down to design something. We know how to do things. We have lessons learned. We have rules of thumb. And, at the top of the list, we have standards. These are specialized requirements documents that focus on specific, narrow technical areas. For example, NASA-STD-5012 tells you what you should do for the structural design of a rocket engine. It lays out the essential analyses to perform, the way that the environments should be evaluated, and what factors of safety are appropriate. For J-2X, we had over thirty different standards that were (and are) part of the requirements imposed upon the engine design details, design processes, fabrication processes, and testing scope and procedures.
Even here, however, after you impose a standard you have to acknowledge the fact that there can exist more than one way to do things and do them safely. For example, on J-2X we imposed a structural design standard that, at a lower level, imposed a standard for how fasteners (i.e., bolts and nuts) are properly lubed and torqued. In order to investigate this issue, we set up a mini-test program to better understand the results from the different methods. It kind of sounds silly, but fastener torque is extremely important in high-pressure systems and proving that the contractor process was equivalent and safe could save us money in the long run since it is a standard procedure for them. So, we had a guy follow the procedures several times and we measured the strain induced into a series of bolts by the applied torquing method. The measured strain was converted to applied force and this thereby validated the procedure. Across the spectrum, we had a number of similar examples where we interpreted the technical intent and purpose of a detailed requirement and, working with our contractor, found the best way to comply.
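As a rough illustration of the kind of cross-check involved, here is a back-of-envelope sketch. It uses the classic short-form torque-tension relation (T = K·F·d) and the strain-to-force conversion (F = ε·E·A); the bolt size, nut factor, modulus, and measured strain below are all hypothetical numbers, not actual fastener data.

```python
# Back-of-envelope sketch of the torque/strain cross-check described
# above. All numbers are hypothetical, not actual J-2X fastener data.

def preload_from_strain(strain, youngs_modulus_pa, area_m2):
    # Axial force inferred from measured bolt strain: F = eps * E * A
    return strain * youngs_modulus_pa * area_m2

def preload_from_torque(torque_nm, nut_factor, diameter_m):
    # Classic short-form torque-tension relation: T = K * F * d
    return torque_nm / (nut_factor * diameter_m)

# Hypothetical 10 mm steel bolt torqued to 50 N-m with a lubed nut factor
E = 200e9      # Pa, Young's modulus for steel
A = 5.8e-5     # m^2, assumed tensile stress area
measured = preload_from_strain(2.9e-3, E, A)       # from strain gauges
predicted = preload_from_torque(50.0, 0.15, 0.010) # from applied torque
agreement = abs(measured - predicted) / predicted  # fractional difference
```

If the measured and predicted preloads agree within an acceptable tolerance over repeated trials, the contractor's lube-and-torque procedure is validated as delivering the intended clamping force.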
System safety program — As an engineer, the question foremost in your mind is always, “How can I make this thing work?” Without that mindset, we would never get anywhere. However, when dealing with something as complex and as potentially dangerous as spaceflight, you must go beyond this level of thinking and must also continuously ask yourself, “What could go wrong with this thing and how do I mitigate that potential as much as possible?” In the most basic sense, this is the motivation for developing a system safety program. As part of the engine design and development process, you look at this issue from two directions.
First, you look at the piece-part level and ask, “What could break, how or why, and what would be the effects?” That’s a reliability analysis. You look at all of the pieces and figure out what circumstances could result in something not working as intended. Could the design be mistaken because we didn’t understand the loads? Could the loads go off nominal because of some unusual flight situation? Could the manufacturing of that piece go awry so that you don’t have the intended design margins in the actual, physical part? And, for all of these questions, you have to provide answers as to how best to ensure that the part won’t actually break during operation.
Second, you start from the other end. You start with the grim notion that you’ve failed and that the crew didn’t make it. From there you work backwards and figure out how and why that situation could take place. This process grows into a tree of circumstances and possibilities and is called a hazards analysis. Was it an explosion? If so, where did the fuel and oxidizer and ignition source come from? If the fuel came from a tank, then how did it escape? Was it instead something having to do with navigation? Or maybe there was a weather-related issue, perhaps, say, lightning?
Obviously, in many places these two assessments eventually meet in the middle. The one starts at the bottom and works upwards. The other starts at the top and works downwards. When they meet, then you know where the critical points are throughout your system. In some cases this drives design features, special inspection requirements, or, for example, in the case of lightning protection, the design and construction of a launch pad system for dealing with the hazard. This overall effort allows you to prioritize your efforts to ensure safety and, in the operational phase, potentially apply greater attention prior to committing to launch.
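For a flavor of how the hazards-analysis tree can be quantified once it is built, here is a toy fault-tree evaluation. The events and probabilities are invented, real trees are vastly larger, and the arithmetic assumes independent events, but the AND/OR gate logic is the essence of the method.

```python
# Toy fault-tree sketch (the top-down, hazards-analysis direction):
# top-event probability computed from basic-event probabilities,
# assuming independent events. All events and numbers are invented.

def or_gate(probs):
    # P(any of several independent events) = 1 - product(1 - p_i)
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    # P(all of several independent events) = product(p_i)
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical tree: an explosion requires a fuel leak AND an ignition
# source; a fuel leak can come from a cracked duct OR a failed seal.
p_leak = or_gate([1e-4, 5e-5])          # cracked duct, failed seal
p_explosion = and_gate([p_leak, 1e-2])  # leak present and ignition source
```

Working the numbers through gates like these is what lets you rank which branches of the tree deserve the design changes and special inspections mentioned above.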
Test and Evaluation
Structured verification planning and reporting — Believe it or not, we don’t march into an engine test program all willy-nilly and make a bunch of smoke and fire just for the sake of impressing our friends. We do it to generate and collect data. The data that we collect largely goes towards the systems engineering endeavor known as requirements verification. Verification is defined as the process of demonstrating that the product design — in our case an engine — is in compliance with imposed requirements. Verification can, and does, take a number of forms. Testing is one form. Analysis and inspection are others.
Note that the “structured” part of the “structured verification” title above is a key consideration. You must lay out plans saying, “Here is my requirement and here is what I plan to do to prove that I meet it.” Then, based upon peer review of experts, this plan can be approved or modified. This is an essential part of the whole judgment aspect of human rating. If I demonstrate that I meet the requirement with one engine on one test, is that good enough? If not, how many engines or tests do I need? Or, if it’s verification by analysis, do you agree with the analysis methodology that we propose to use? Do you concur with the assumptions and the simplifications inherent in any analysis method? The whole process, when properly approached, has the flavor of the classic scientific method. The hypothesis is that the product meets the requirement and then you set out to prove that hypothesis.
Smart people with backgrounds in mathematics inevitably jump into the conversation here and declare the supremacy of statistics. Using statistical analysis, we can determine how many samples and tests are necessary to achieve a mean and variability assessment at a given confidence level. Unfortunately, as good as those methods might be, we can never come close to affording the kinds of programs that a purely statistically based assessment would suggest. Maybe back in the day we could afford to build and test 100 engines before we were ready to fly, but today our constraints are to accomplish the same level of risk mitigation with an order of magnitude fewer samples. We have to be wiser and more efficient, and yet still have sufficient confidence to declare that the design meets its requirements.
Test, test, test, and then test some more — Now, after having discussed a fundamental motivation for testing engines, i.e., requirements verification, you have to get down to the nuts and bolts of the issue. You must test and you must do it a lot. Yes, “a lot” is not what you’d call a scientific term, but it can be decomposed. “A lot” means that you cover your verification plans in terms of samples and repeat examples. It means that you push things beyond normal operation to prove margins. You test longer — both single run and cumulative on a given engine, both starts and seconds — than any flight engine could possibly ever see. And throughout this process, you continuously learn things that you didn’t know that you didn’t know. It is theoretically possible that we could design an engine, put it into test, and find that we’d properly characterized every environment and every engine response to those environments, but I’ve never seen such a case and nobody that I know has ever heard of such a thing. Engine testing is always an education.
The other aspect of testing that is sometimes categorized separately is teardown and detailed inspection of the hardware afterwards. If you predicted that something wasn’t going to crack and, upon teardown, you find a crack where it shouldn’t be, then you’re not as smart as you thought you were (a phrase I’ve used before). If you tear down and find that something was rubbing in a valve or a turbopump, then that might be an issue. Or, instead, it might have been planned that way. You look for discoloration that might suggest unexpected operational conditions or potential changes in material properties. You check dimensions of everything to make sure that you didn’t deform pieces or possibly lose material that was consumed by the engine. Thus, while you collect lots and lots of data during the engine tests, it is also the data that you collect after the testing is complete that contributes substantially to your understanding of the design and its safe operation.
Quality processes — Twenty-some years ago, the Ford Motor Company had a motto that they used in advertising: “Quality is Job One.” With all due respect to that venerable motor company, those of us in the rocket world have known this for a long, long time.
When we certify an engine design and say that it is “human-rated,” that is a contingent description. It is contingent upon future flight engines being produced in the same manner and to the same detailed workmanship standards as the design that you certified. That means that the fabrication and testing processes are the same, the materials are the same, the people doing the work on the pieces have had the appropriate training, and that the finished parts have been subjected to the same inspections and inspection standards. And, if things can’t be exactly the same (for example, vendors can change over time), then you must have a process in place to assure equivalence between what you had before and what you’re going to use new.
Also, should something go awry during the manufacturing or assembly of any part — and things always go awry to some degree at some point — you need to have processes in place to identify what went wrong, how to avoid that issue in the future, and what to do with any hardware that was exposed to the issue. Can you fix it and still meet your requirements and drawing specifications? Or, do you have to scrap the part because it can’t be saved?
These considerations are all part of a good, solid quality system.
Configuration management — The first cousin of quality assurance is configuration management. While it sounds like a simple premise, this discipline deals with making sure that the exact, particular pieces on the vehicle are the exact, particular pieces that you intended to put on the vehicle. This means, for example, that every bolt on the engine is suitable for a flight engine. No, not every bolt has a serialized part number, but they are segregated by lots. Lots intended for flight usage are subjected to stringent quality processes and must, therefore, be kept separate from any similar-looking bolts that might not meet the high standards for flight. Plus, of course, we track throughout their lives the history of our serialized assemblies like turbopumps, combustion chambers, nozzles, ducts, lines, controllers, valves, etc., along with their associated documentation. An engine is composed of thousands of parts and, one way or another, we track them all.
The combination of a good quality assurance system and a good configuration management system guarantees that what you have delivered and put on the launch vehicle is exactly what it is advertised and intended (and needs) to be.
That’s it. Those are, in my opinion, the key ingredients for human rating.
So, getting back to cooking. In order to make vegetarian chili, you need tomatoes, beans, and chili powder. That’s it. But chili made with just these ingredients would be terrible. I add peppers (of multiple varieties) and onions and garlic and other spices. Corn can add a nice sweetness. Sometimes I sauté chopped portabella mushrooms and toss them in. Beyond that, I’ve been known to add all kinds of oddball stuff including, once, green beans. And, in the end, it’s good. I promise. That’s because I’ve made it probably thirty or forty times over the years and therefore I am a subject matter expert (within my tiny culinary world). Solid, well-defined ingredients and expert judgment inform my chili.
In order to have a “human-rated” rocket engine, all of the topics that I mention above represent the key, essential ingredients: (1) a few, specific human-rating design requirements, (2) a set of established design, construction, and workmanship standards, (3) a thorough safety program, (4) a structured verification process, (5) a system testing campaign, (6) a solid quality assurance system, and (7) a reliable configuration management system. They are all necessary. And certain bounds, limits, or standards can be established (and are documented) for all of these various disciplines and undertakings, but an exact, repeatable, or universal, step-by-step recipe is extremely difficult to conjure up. Just like my chili, the details of how, when, and why an engine is “human-rated” fall within the purview of having good key ingredients and then applying expert judgment.
5 thoughts on “J-2X Extra: Human-Rated Chili”
I appreciate the time and effort that you are doing to explain the process of developing and testing the J-2X engine to the public. We are not rocket scientists but your explanations are very beneficial to help us understand the work that you and other engineers are doing to improve the reliability, safety, power, and efficiency of the rockets that we are going to be using for future space travel. Thanks for your dedication!
I echo John A. Thank you for all this, especially your answer to my question about clustering a little while back. I apologise for not responding earlier, my life suddenly got very busy very quickly.
Why not “crew-rated”, gender-neutral and you save two letters and a syllable over “human”. That toner mounts up you know 🙂
As someone with a Bachelor’s degree in Maths and Stats, I would hope that anyone educated enough in the subject would understand that establishing reliability through sampling would not be practical in this case. Personally, I would have thought that Bayesian analysis would be a lot more useful, especially for safety ratings.
Also (excuse me for a moment while I switch hats) as an IT professional, I am intrigued by your apparent lack of reference to computer modelling. I would have thought that a vital component of safety rating would be the ability to demonstrate that your computer models of how the engine would react to various conditions and faults indeed corresponded to reality. Perhaps this is all subsumed under the other headings and I’m just tuned to think of it as something distinctive due to my own speciality.
So what’s the A-3 test stand going to allow to be improved, that won’t be possible on the current stands? So many years & work has been done on the current stands, it’s hard to imagine the thing isn’t nearly done & the A-3 can add anything.
@Alex: Regarding statistics and sampling methods, I should mention that our brethren who work in the solid rockets area are far, far more experienced and beholden to statistical analysis to ensure performance and quality. Whereas with a liquid rocket engine you have the opportunity for a test run as a final check that all is well with the engine, you obviously cannot do that with a solid rocket. There is no green run of the propellant grain. Once it is lit, it is going to finish burning and then that’s that. So their operation depends heavily upon statistical process control including a very structured sampling and demonstration regimen. It’s really quite impressive what they do.
With regards to computer models, I did not mention them explicitly because they represent a “how,” not necessarily a “what.” How do I design a rocket engine? In the old days, you used a pen and vellum. Today we have three-dimensional computer models of every single piece part. If we want a “drawing,” we have to pull that two-dimensional representation out of our three-dimensional model. How do I verify engine performance from test stand data? Well, we have computer models called data reduction schemes that take the raw data from the engine and test stand instrumentation and turn that into things like thrust and propellant flow rates. We have tools that compare the test results at the detailed level of measured pressures and temperatures and rotational speeds to our computer models for the engine power balance. This is called model validation. We use the test data to confirm our modeling of engine performance or, sometimes, figure out where we screwed up a bit in our modeling assumptions.
Model validation is also extremely important for all of the structural analyses throughout the engine. Very often we have computer models for the expected environments to which the engine components will be exposed and then we have computer models of the components themselves and their response to these environments. We use as much test data as we can collect to validate these models for both the environments and for component responses. Thus, we use models on the front end of the design phase and, upon model validation, use them on the back end for verification purposes. And, of course, I cannot even imagine attempting to do the detailed bookkeeping of configuration management or quality assurance without our computer tools.
Thus, computer models and/or tools are an intrinsic part of every aspect of rocket engine development. On the other hand, our space industry forefathers did remarkable things without many of these tools at their disposal. So, could you build and fly human-rated rockets to space without today’s computer models and tools? Of course you could. The development program would look different, but it’s already been done. However, we believe that the computer models and tools we have today make the process more efficient and help us produce safer, more thoroughly understood products.
Finally, just FYI, I’ve never been a fan of “crew-rated” simply because it leads folks to describe the vehicle as “crewed.” That’s just too close to “crude,” as in the opposite of sophisticated and complicated, and that’s not how I’d want folks to think about the work we do around here (…even if it does sometimes accurately describe the people doing the work, LOL).
@Guest: regarding test stand A-3.
The unique capability that test stand A-3 provides (or will provide when it is complete and operational) is altitude simulation from start through shutdown for a large rocket engine. Test stand A-2 has a passive diffuser, meaning that once the engine is up and running, thanks to the work of Bernoulli effects, we achieve a limited amount of altitude simulation during engine mainstage. However, at start the pressure that the engine sees is ambient sea level pressure and, upon initiation of the shutdown sequence, the diffuser rapidly rises back up towards sea level pressure. Test stand A-3 has an active diffuser, meaning that it is “sucked down” prior to engine start and maintained at low pressure (i.e., altitude simulation) throughout the test.
While that doesn’t sound like much of a difference, you have to realize that it is only under these conditions that the full nozzle extension of the J-2X can be tested. If we tried to test the full nozzle extension of the J-2X with sea level pressure downstream, even if just during the start transient, the nozzle flow would be so violently separated that the nozzle would be torn apart.
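A rough way to see why is a Summerfield-type rule of thumb: nozzle flow tends to separate from the wall when the exit pressure falls below roughly 0.25 to 0.4 of ambient pressure. Here is a sketch with made-up numbers (not actual J-2X or A-3 values) showing why the same nozzle that runs full in a sucked-down diffuser separates violently against sea-level back pressure:

```python
# Rough sketch of nozzle flow separation, using a Summerfield-type rule
# of thumb: flow separates when exit pressure < ~0.4 of ambient.
# All pressures below are hypothetical, not actual J-2X/A-3 values.

def is_separated(exit_pressure_pa, ambient_pa, criterion=0.4):
    """True if the rule of thumb predicts separated nozzle flow."""
    return exit_pressure_pa < criterion * ambient_pa

SEA_LEVEL = 101325.0   # Pa, standard sea-level ambient
ALTITUDE_SIM = 2000.0  # Pa, hypothetical sucked-down diffuser pressure
p_exit = 10000.0       # Pa, hypothetical exit pressure of a big nozzle

at_sea_level = is_separated(p_exit, SEA_LEVEL)       # separated: bad
in_a3_diffuser = is_separated(p_exit, ALTITUDE_SIM)  # flows full: good
```

The A-2 passive diffuser only achieves the low back pressure once the engine is at mainstage; the A-3 active diffuser holds it from before start through shutdown, which is why only A-3 can test the full nozzle extension end to end.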
A-3 is the only place that active altitude simulation testing can be accomplished for a large rocket engine. There is a test facility up in Ohio, just south of Sandusky, called Plum Brook Station and associated with the NASA Glenn Research Center in Cleveland that can provide altitude simulation for smaller engines (such as the RL10). There is a facility in Tullahoma, Tennessee called the Arnold Engineering Development Center (AEDC) that could be transformed into a test facility for J-2X and that indeed was used for J-2 testing back during the Apollo era. But even if it were so transformed, it would only be a temporary facility. AEDC has other customers and it was decided that we needed a permanent and dedicated altitude simulation test facility for long term J-2X production. That was why we endeavored off into the construction of test stand A-3.