Welcome back to the J-2X Doghouse. We’re going to talk about some test results and test data, exactly what Data Dogs love most to do.
Back in the day — back before I had the carefully regulated, federally mandated, and strictly enforced lobotomy that allows people into the ranks of management — I was once an analyst. And, since it seems so long ago that it doesn’t sound like bragging anymore, I will admit that I was pretty good at it. I absolutely loved the process of pulling together fundamental physics and empirical correlations for fluid dynamics, thermodynamics, and heat transfer to simulate in computer code how things function in the real world. Whereas many people enter the field of engineering because they like mechanical things or electronic things, there are some of us who relish the seeming purity of problem solving in abstraction.
Over the years, working on many diverse projects and building many diverse mathematical models to simulate many diverse systems, I came to the realization that my models always appeared most unassailable and brilliant when there was no test data against which to compare them. To put it bluntly, test data always proves that you simply ain’t as smart as you thought you were. But, that’s okay. If that weren’t the case, then you wouldn’t bother to test. The whole point of testing is to gather data and learn more.
With all due respect to the Serenity Prayer, this ought to be the analyst’s prayer relative to testing:
Grant me —
— the results to validate that which I do understand
— the data to explain that which I did not understand
— and the openness to accept that I can always understand better
That last line is critical. Ignoring data that contradicts your model output is a seductive, addictive, and dangerous path to follow. We don’t/won’t do that.
That brings us to the subject of test A2J003 of the J-2X development engine E10001. This was our first test to mainstage operation. The planned duration was seven seconds. On Tuesday 26 July, right around five in the afternoon, the test ran for 3.72 seconds and then shut down. We did not accomplish the full duration. Why? Basically because we ain’t as smart as we thought we were. We had analytical models telling us that performance would be X, but the hardware knew better and decided on its own to perform to Y. Here is a cool video of the test:
A more detailed explanation of what happened is that the engine shut down due to a measurement of pressure in the main combustion chamber that was too high. The measurement crossed a pre-set “redline” and the test controller unit took the automatic (and autonomous) action of shutting down the engine in a safe and controlled manner. The high pressure in the main chamber was caused by higher-than-expected power coming out of the oxidizer pump. This, in turn, was due to more power being delivered to the turbine side than expected. It comes down to a fluid dynamics phenomenon (pressure drops), and what we have is not inherently bad, just different than expected. So, in essence, we used our models to predict that the pressure in the main chamber would be at a certain level — indicating a certain power level — but the different performance of the hardware pushed us away from our analytical prediction.
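To make the redline mechanism a bit more concrete, here is a minimal sketch, in Python, of the general idea: a pre-set limit, a comparison against the live measurement, and an autonomous command for a safe shutdown if the limit is crossed. The names, numbers, and structure below are purely illustrative assumptions; they are not the actual test controller software or its real redline values.

```python
# Illustrative sketch only -- the real J-2X test controller logic, parameter
# names, and limits are not described here; these are made-up stand-ins.
from dataclasses import dataclass

@dataclass
class Redline:
    name: str           # label for the monitored measurement (hypothetical)
    upper_limit: float  # pre-set redline value; units depend on the sensor

def tripped_redlines(measurements: dict[str, float], redlines: list[Redline]) -> list[str]:
    """Return the names of any redlines exceeded by the current measurements."""
    return [r.name for r in redlines if measurements.get(r.name, 0.0) > r.upper_limit]

def control_loop_step(measurements: dict[str, float], redlines: list[Redline]) -> str:
    """One pass of a simplified monitor: command a safe shutdown if any redline trips."""
    tripped = tripped_redlines(measurements, redlines)
    if tripped:
        # On test A2J003 the real controller took this action autonomously when
        # the main chamber pressure crossed its redline.
        return "SAFE SHUTDOWN commanded (redline exceeded: " + ", ".join(tripped) + ")"
    return "CONTINUE"

# Invented numbers, purely for illustration:
limits = [Redline(name="MCC pressure", upper_limit=1400.0)]
print(control_loop_step({"MCC pressure": 1450.0}, limits))  # -> SAFE SHUTDOWN ...
print(control_loop_step({"MCC pressure": 1300.0}, limits))  # -> CONTINUE
```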
- Here is the good part: We learned something. We learned that our model needs to be updated and we collected the data that will allow that to happen.
- Here is another good part: We got enough data, despite the short duration, to recalibrate the engine for the next test, thereby making it far more likely that we will hit our target.
- Here is yet another good part: We had a successful demonstration of the test controller redline system by safely shutting down the engine. The engine looks fine. The controller did exactly what it was supposed to do and protected the hardware. In fact, for these early tests we have the redlines clamped down pretty tight specifically to protect the hardware as we learn more about the engine.
- And here is, finally, yet another good part: Other than the power applied to the oxidizer turbopump, most of our other predictions with regard to hardware performance appear to be awfully darn good. So, we’ve got a preliminary validation for much of our understanding of the engine. Indeed, this is a brand new engine and we have just accomplished mainstage operation in the second hot-fire test. That is truly unprecedented.
- Here is the bad part: We have to spend a few minutes explaining to folks not directly involved that despite not achieving full duration, the test was in reality a total success.
If that, then, is the bad part, I can live with it. I can live with admitting that we ain’t as smart as we thought we were. Why? Because now, after the test, we are indeed smarter. And we will continue to get smarter and smarter about the J-2X design until, one day, we will be smart enough to say that, yes, we understand this engine so well that it is safe enough to propel humans into space.