The Vision Thing

George Herbert Walker Bush, the 41st President, calls Houston home.  He and Barbara can frequently be seen at Astros baseball games; he made his early career in business here, served as a congressman from here, and even taught a course as an adjunct professor at my alma mater, Rice University.

 

As you may recall, Mr. Bush failed in his re-election bid.  There were a number of reasons for his loss, but one of the most frequently cited was “the vision thing”.  The critics felt that he had not clearly articulated his vision for the future of the nation, a vital function of an effective chief executive.  Keep that in mind.

 

On July 20, 1989 – the 20th anniversary of the first lunar landing – President G. H. W. Bush made a speech proposing what would come to be known as the Space Exploration Initiative (SEI).  This proposal included a permanent return to the Moon and human missions to Mars.  In spite of the bold and visionary words, the initiative quickly failed.

 

Thor Hogan has written an excellent book on the history of the fiasco.  His book is “Mars Wars: The Rise and Fall of the Space Exploration Initiative,” NASA SP-2007-4410, August 2007.  I highly recommend this book for those who are interested in how national space policy is made and how federal agencies can be dysfunctional at times.  Dr. Hogan is a Professor of Political Science at the Illinois Institute of Technology in Chicago.  Let me reiterate – I highly recommend this book if you are interested in these topics.

 

When “Mars Wars” was first published in 2007, it was widely circulated in NASA management and caused considerable discussion.  Everyone was trying to make sure to avoid the mistakes made in 1989 and create a successful program.  There are many lessons to be learned from that earlier experience, and Dr. Hogan hit many of them.

 

The most frequently cited lesson from SEI is the need to build a “sustainable” program.  That is a shorthand way of saying: stay within an affordable budget.  One of the principal reasons the Space Exploration Initiative failed was its price tag.  The SEI package was dead on arrival at Congress because of the high cost.  Trying to apply this lesson, the NASA program of the last five years to send crews to the Moon and Mars strove mightily to remain within the budget line announced in 2004.

 

But back to 1989, because there are other lessons to be learned there.  That year I was a rookie Space Shuttle Flight Director, learning the ropes in Mission Control, and I had no time to be involved in SEI.  But I was immersed in the NASA culture of the day and I remember that time with great clarity.  Dr. Hogan’s “Mars Wars” book does a wonderful job of capturing the motivations of agency personnel in those days. 

 

It was barely three years after the Challenger accident and the wound was still raw.  Challenger was a driving factor in the SEI story.  Remember that widely held beliefs are important whether they are accurate or not.  That is because what people believe to be true motivates them.  So take the following paragraph not necessarily as historical truth but as the mythos which psychologically undergirded the folks running SEI.

 

NASA was not allowed to build the shuttle the “right” way, that is, to make it both elegant in its engineering and safe.  Lives were lost in Challenger due to basic design choices forced on the agency by severe budgetary restrictions.  In the early 1970s, when NASA was authorized to design and build a revolutionary reusable winged space vehicle, the Office of Management and Budget capped the total development cost of the shuttle at $5 billion.  (Money went further back then.)  This cap was far too low to allow development of several of the more innovative design options.  Fly-back liquid-fueled boosters were out of the question, for example.  Operational costs were higher because of choices required to keep the development costs low.  Safety was lower.  So we got an aero-space plane with a big dumb drop tank and two scaled-up JATO bottles.

 

After Challenger, there was no money to significantly improve the basic design of the shuttle.  NASA was faced with the prospect of flying its less-than-safe shuttle for the long term, but with even higher operational costs and a lower flight rate.  Anger is the only term I can use to describe the general feeling back then.  Anger that NASA was forced to build something less than perfect by green-eyeshade bean counters in Washington.  Anger that those decisions had led to the loss of seven of our colleagues.  Anger that NASA wasn’t given the authorization to build a second-generation shuttle to correct those problems.  So when the President announced a plan to build a spaceship to go back to the Moon and on to Mars, the most frequently heard comment around the human space flight institutions was “We’ve got to do this right this time.”  Even the NASA Administrator, Dick Truly, was heard to say, “we’ve got to do it right this time or not at all.”

 

So when the SEI plan turned out to be very expensive, a significant part of that cost was driven by the thought that it must be done “right”.

 

Fast forward twenty years.  The recent program struggled mightily to stay within its budget because it perceived “sustainability” as the principal lesson learned from the last time.  Smarting from the recent loss of Columbia, the organization believed that it still had to be done “right”.  So technical performance was not open for compromise.  In program management, that means the only relief can come from schedule delay.  Delay that led to an increasing gap.  Now, that analysis is very, very overly simplistic.  Blog-level simplistic, in fact.  I don’t need to connect any more dots from that point; you can do that yourself.

 

I am told that over the last 20 years more than 15 major NASA Human Spaceflight programs have been cancelled.  I can’t recite the whole list but will give a few examples:  X-38, HL-20, X-33/VentureStar, Space Launch Initiative (SLI), and Orbital Space Plane (OSP).  Some never got past the viewgraph stage.  Some were cancelled because of technical issues.  Some were cancelled for budgetary issues.  And some were cancelled for political reasons. 

 

So, in hindsight, was the shuttle built “right”?  That program actually flew, albeit late and only somewhat over budget.  Of course the operations budget never came down like it was supposed to.  The shuttle is clearly not “safe” in the conventional sense, but will space flight ever be “safe” like putting your kids on the school bus?  At least the shuttle program actually kinda sorta worked and didn’t get cancelled before the first test flight.

 

What constitutes “right” in advancing human space flight?  Something reasonably safe and not too costly, something that opens up the space frontier to many people rather than a few?  Is that the foundational paradigm that is being overturned today? 

 

There are more than a few lessons that you can extract from Dr. Hogan’s book.  I strongly suggest you read it.

 

And one more thing:  there is a saying that doing the same thing over and over again and expecting different results is the definition of insanity.  It might appear that we have been doing the “same thing” over and over again and wondering why the result turns out this way.  This time the paradigm is shifting at a foundational level.  Will the new paradigm avoid the same old outcome?

 

Is that a vision?   Hmm.

Tripping the Boundary Layer – Part 1

As I start this series, it occurs to me that “tripping the boundary layer” could be an article on social change – maybe I’ll do that. 

But for today it is an engineering subject.  So buckle your seatbelt and hold your hat, we are off on an adventure in rocket science!

Aviation has been driven by the desire to fly higher and faster.  Great strides were made, especially up to the mid-1960s.  But for the last few decades aircraft have been at a plateau in terms of speed and altitude.  With the exception of rocket-powered X-planes, the boundary for high performance jets has been just faster than Mach 3 and up to about 100,000 ft.  Even though there is the perennial dream of hypersonic transports carrying passengers across the globe in a fraction of the time today’s aircraft take, we don’t seem to be advancing on that dream.

Part of the problem is that we don’t understand how to avoid tripping the boundary layer.  There is precious little data at hypersonic speeds, and computer simulations are no good without data and the formulae derived from data to predict these things: garbage in, garbage out.

So, to start this discussion off, let us define the terms.  (What the dickens are we talking about?)  What’s a boundary layer, and what does it mean to trip one?

In aviation, the boundary layer is a thin film of air closest to the wing, body, or engine of an aircraft.  At the molecular level, the air immediately adjacent to the airplane is dragged along with the plane.  Infinitesimally farther away, the air is being carried along at some fraction of the speed of the airplane, and farther away still, the air is not moving at all, or at least not being dragged by the airplane.  That distant air is called the “free stream,” and the close-by air – which is affected by the passage of the aircraft – is called the boundary layer.  Typically, aerospace engineers consider the boundary layer to be that close-in part of the air that is being dragged along by the passing aircraft at 5% or more of the airplane’s speed.  These boundary layers are thin: inches or fractions of an inch.  They are important because the boundary layer causes most of the drag and most of the heating when an airplane is in flight.
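The claim that boundary layers are inches thick or less is easy to sanity-check with the classic flat-plate (Blasius) formula for laminar boundary-layer thickness, δ ≈ 5x/√Re.  The numbers below – speed, viscosity, and distance from the leading edge – are assumed purely for illustration:

```python
import math

def blasius_delta99(x, u_inf, nu):
    """Laminar (Blasius) 99% boundary-layer thickness over a flat plate, in meters."""
    re_x = u_inf * x / nu              # local Reynolds number
    return 5.0 * x / math.sqrt(re_x)

# Assumed illustrative numbers: airliner-class speed, air at cruise altitude
u_inf = 250.0   # airplane speed, m/s
nu = 3.5e-5     # kinematic viscosity of air at altitude, m^2/s (assumed)
x = 1.0         # distance back from the leading edge, m

delta = blasius_delta99(x, u_inf, nu)
print(f"Laminar boundary-layer thickness at x = {x} m: {delta * 1000:.1f} mm")
```

For these assumed numbers the layer comes out on the order of a couple of millimeters – a small fraction of an inch, just as the text says.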

Boundary layers, like all fluid flows, are either laminar or turbulent.  Laminar flow is smooth; turbulent flow is, . . . well,  . . . turbulent.  You can see a good YouTube video of this here:

http://uk.youtube.com/watch?v=NplrDarMDF8

And there is a really good Wikipedia article on turbulence here:  http://en.wikipedia.org/wiki/Turbulence

So why is all of this important?  Right now there is a large effort by many companies and government agencies to develop hypersonic aircraft.  NASA has even sponsored a couple of test flights.  The problem, as it is for all types of aircraft flight, is drag and heating.  When the boundary layer over the wings or in the engine is laminar, there is low drag and low heating; when the boundary layer is turbulent, drag and heating increase dramatically.  All boundary layers can be “tripped,” or transition from laminar to turbulent flow.
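Just how dramatically drag increases when the layer trips can be seen from the standard flat-plate skin-friction correlations found in any aerodynamics text.  This is a rough sketch, at an assumed flight-typical Reynolds number:

```python
import math

def cf_laminar(re_x):
    """Flat-plate laminar skin-friction coefficient (Blasius solution)."""
    return 1.328 / math.sqrt(re_x)

def cf_turbulent(re_x):
    """Flat-plate turbulent skin-friction coefficient (empirical power-law fit)."""
    return 0.074 / re_x ** 0.2

re_x = 1.0e7  # assumed Reynolds number, typical of flight
lam = cf_laminar(re_x)
turb = cf_turbulent(re_x)
print(f"laminar Cf   = {lam:.2e}")
print(f"turbulent Cf = {turb:.2e}")
print(f"tripping the layer multiplies friction drag by roughly {turb / lam:.0f}x")
```

At this Reynolds number the turbulent friction coefficient is several times the laminar one – and the surface heating rate climbs along with it, which is exactly the designer's problem.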

In some of these experimental aircraft the engines (called scramjets, for Supersonic Combustion RAMjet engines) have only operated for a fraction of a second or a very few seconds.  Why?  Because the designers do not know how to cool them; they don’t understand when or whether the boundary layer inside the engine is turbulent or laminar.

In some of these experimental aircraft, the engine begins to melt as soon as it is turned on; hence the extremely short operating times.

This is no good for a hypersonic passenger aircraft which might carry a hundred people from New York to Tokyo in a couple of hours. 

Why do we not understand this phenomenon?  Because it cannot be recreated in a wind tunnel or other experimental apparatus.  The wind tunnels that have long enough flow durations to study this phenomenon run only up to about Mach 6.  These hypersonic engines need to perform at Mach 8 or 10 or 12.  There are “wind tunnels” that operate at high Mach numbers but only for fractions of a second; not long enough to understand the way in which a boundary layer works.

No aircraft fly that fast, and missiles can achieve those speeds only briefly, but there is one platform that spends a serious amount of time flying through the atmosphere at speeds above Mach 6:

It’s the space shuttle.

Tomorrow I’ll talk about an experiment that will be on the next shuttle flight, an experiment which will study tripping the boundary layer.

With this knowledge, the designers just might be able to make a major advancement toward hypersonic passenger aircraft.

To hold your attention until my next post, here is a true story:

Around 1900 a young graduate student in physics was trying to do research on a problem that could earn him a doctorate degree.  He started out studying the transition from laminar to turbulent flow in fluids.  After months of work and study, he concluded that this problem was too hard.  He would concentrate on an easier subject:  atomic physics.  His name was Niels Bohr, and he won the Nobel Prize in Physics in 1922 for his work on the structure of the atom.  And he was right; turbulence is harder.  And we don’t understand it yet.

 

 

Factors of Safety

     Old joke:  “You see the glass as half empty, I see the glass as half full, but an engineer sees the same glass and says ‘it is overdesigned for the amount of fluid it holds.’”

 

     When an engineer starts out to build something, one of the first questions to be answered is: how much load must it carry in normal service?  The next question is similar: how much load must it carry at maximum?  An engineer can study those questions deeply or very superficially, but having a credible answer is a vital step at the start of the design process.

 

     Here is an example.  If you design and build a stepladder which just barely holds your weight without breaking, what will happen after the holidays when your weight may be somewhat more than it was before you ate Aunt Martha’s Christmas dinner?  You really don’t want to throw out your stepladder in January and build a new one, do you?  Obviously you should build a stepladder that can hold just a little bit more.  And don’t forget what might happen if you loan your stepladder to your couch-potato neighbor who weighs a lot more than you do.  Can you say lawsuit?

 

     So how do you determine what your stepladder should hold?  Do you find out who is the heaviest person in the world and make sure it will hold that person?  Probably not.  Better, pick a reasonable number that covers, say, 95% of all folks, design the ladder to that limit and put a safety sticker on the side listing the weight limit.  Yep, that is how most things are constructed.

 

     But that is not all.  Once you determine the normal or even the maximum load, it is a wise and good practice to include a “factor of safety” (FS).  That means that you build your stepladder stronger than it needs to be.  This helps with the idiots who don’t read the safety sticker; it also helps protect against some wear and tear, and it can also protect you if the actual construction of your stepladder falls somewhat short of what you intended.  So you might build your stepladder with an FS of 2.  That would cover 95% of all folks with plenty of margin for foolish people who try to accompany their friend up the ladder, or for when your ladder has been in service for 25 years (like mine), or for when your carpenter buddy builds the stepladder with 1/4” screws rather than ½” screws like you told him to.
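The arithmetic behind a factor of safety is as simple as it sounds: design load equals the maximum expected load times the FS.  A minimal sketch, with an assumed 95th-percentile user weight:

```python
def design_load(max_expected_load, factor_of_safety):
    """Load the structure must carry without failing."""
    return max_expected_load * factor_of_safety

# Assumed numbers: a 250-lb "95% of all folks" user, and the FS of 2 from the text
limit_load = 250.0   # lb, heaviest expected user (assumed)
fs = 2.0

required = design_load(limit_load, fs)
print(f"Design the ladder to hold {required:.0f} lb without breaking")  # 500 lb
```

The sticker on the side still says 250 lb; the extra strength is silent margin for the wear, the loaned-out ladder, and the undersized screws.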

 

     Factors of safety are not pre-ordained.  They have been developed over the years through experience and unfortunately through failures.  Some factors of safety are codified in law, some are determined by professional societies and their publications, and some are simply by guess and by golly.  Engineering is not always as precise as laypeople think.

     

     It’s a dry passage but I’d like to quote from one of my old college textbooks on this subject (Fundamentals of Mechanical Design, 3rd Edition, Dr. Richard M. Phelan, McGraw-Hill, NY, 1970, pp 145-7):

 

“ . . . the choice of an appropriate factor of safety is one of the most important decisions the designer must make.  Since the penalty for choosing too small a factor of safety is obvious, the tendency is to make sure that the design is safe by using an arbitrarily large value and overdesigning the part.  (Using an extra-large factor of safety to avoid more exacting calculations or developmental testing might well be considered a case of “underdesigning” rather than “overdesigning.”)   In many instances, where only one or very few parts are to be made, overdesigning may well prove to be the most economical as well as the safest solution.  For large-scale production, however, the increased material and manufacturing costs associated with overdesigned parts result in a favorable competitive position for the manufacturer who can design and build machines that are sufficiently strong but not too strong.

            As will be evident, the cost involved in the design, research, and development necessary to give the lightest possible machine will be too great in most situations to justify the selection of a low factor of safety.  An exception is in the aerospace industry, where the necessity for the lightest possible construction justifies the extra expense.”

            “Some general considerations in choosing a factor of safety are  . . . the extent to which human life and property may be endangered by the failure of the machine . . . the reliability required of the machine . . . the price class of the machine.”

 

            Standards for factors of safety are all over the place.  Most famously, the standard factor of safety for the cables in elevators is 11.  So you could, if space allowed, pack eleven times as many people into an elevator as the placard says and possibly survive the ride.  For many applications, 4 is considered to be a good number.  In the shuttle program the standard factor of safety for all the ground equipment and tools is 4.  

 

            When I was the Program Manager for the Space Shuttle, there were a number of times when a new engineering study would show that some tool either could be exposed to a higher maximum load than was previously thought, or that the original calculations were off by a small factor, or for some reason the tool could not meet the FS of 4.  In those circumstances, the program manager – with the concurrence of the safety officers – could allow the use of the tool temporarily – with special restrictions – until a new tool could be designed and built.  These “waivers” were always considered to be temporary and associated with special safety precautions so that work could go forward until the standard could once again be met with a new tool.
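The waiver situation described above reduces to simple arithmetic: the demonstrated factor of safety is the load at which the tool fails divided by the maximum load it is expected to see, and a waiver comes into play when a revised load estimate drops that ratio below the standard.  The numbers here are hypothetical:

```python
def actual_fs(failure_load, max_expected_load):
    """Demonstrated factor of safety for a piece of ground equipment."""
    return failure_load / max_expected_load

REQUIRED_FS = 4.0        # shuttle-program standard for ground equipment and tools

failure_load = 10000.0   # lb, load at which the tool fails (hypothetical test result)
old_max_load = 2500.0    # lb, maximum load per the original analysis (hypothetical)
new_max_load = 3000.0    # lb, maximum load per a new engineering study (hypothetical)

print(f"old FS: {actual_fs(failure_load, old_max_load):.2f}")  # 4.00 -> meets standard
print(f"new FS: {actual_fs(failure_load, new_max_load):.2f}")  # 3.33 -> waiver territory
```

Nothing about the tool changed; a better estimate of the load it sees is enough to push it from compliant to waiver-with-restrictions.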

 

            In the aircraft industry, a factor of safety standard is 1.5.  Think about that when you get on a commercial airliner some time.  The slim factor of safety represents the importance of weight in aviation.  It also means that much more time, engineering analysis, and testing has gone into the determination of maximum load and the properties of the parts on the plane.

 

            For some reason, lost in time, the standard FS for human space flight is 1.4, just slightly less than that for aviation.  Shaving that extra 0.1 off the FS costs a huge amount of engineering work but pays dividends in weight savings.  This FS is codified in the NASA Human-Rating Requirements for Space Systems, NPR 8705.2.  Well, actually, that requirements document only references the detailed engineering design requirements where the 1.4 FS lives.

 

            Expendable launch vehicles are generally built to even lower factors of safety:  1.25 being commonplace and 1.1 also used at times.  These lower factors of safety are a recognition of the additional risk that is allowed for cargo but not humans and the extreme importance of light weight.

 

            It is common for people to talk about human rating expendable launch vehicles with a poor understanding of what that means.  Among other things, it means that the structure carrying the vast loads which rockets endure would have to be significantly redesigned to be stronger than it currently is.  In many cases, this is tantamount to starting over in the design of the vehicle.

 

            So to the hoary old punch line:  Would you want to put your life on the top of two million parts, each designed and manufactured by the lowest bidder?

Real Engineers

I earned an undergraduate degree in engineering from a prestigious and notoriously competitive university.  After that I went on to do engineering research and complete a graduate degree in engineering from another major university with a reputation for excellence in engineering; along the way I wrote and defended a thesis and authored several papers which were published in professional engineering journals.

When I came to work for NASA, I was fortunate to get a job in the operations area:  mission control.  A thorough understanding of engineering principles and practices was mandatory for my job.

So I was floored just a few months later when I first heard it:  “you are not a real engineer.”  I was just “an ops guy”.

In the NASA pantheon of heroes, the highest accolade any employee can be granted is that they are a “real engineer”.  Not even astronauts rate higher.  The heart of the organization worships at the altar of engineering:  accomplishment, precision, efficiency.  What does it take to be a “real engineer”?

In the ethos and mythology of NASA, a real engineer is one who has several characteristics. 

First, they must have a superb grasp of the physics of their subject, a complete and total knowledge of the details of their specialty.  This almost goes without saying.  No nincompoops allowed; no fuzzy thinkers who are vague on the basic concepts.  A “real engineer” knows his arcane stuff forwards and backwards and from the middle out towards both ends and can recite it in his sleep.  “Let me tell you about the inviscid terms of the Navier-Stokes equation . . . ” a real engineer might say.

Second, a “real engineer” must create something, taking it from original concept to working, functioning reality.  No viewgraph engineers ever get the title “real engineer”.  If it doesn’t fly or move or compute or generate power, or do some concrete something, you haven’t built something real, and without building something real, you are never going to be a “real engineer.”  And the thing has to work; if it flops, then you are merely a tinkerer, not a “real engineer”.

Thirdly, “real engineers” are mild mannered; never needing to raise their voices, not loquacious, not given to long and convoluted discussions.  No, real engineers are soft spoken and terse; they are recognized by their brevity and the ability to concisely summarize a technical point in a way that admits to no further discussion.

NASA is full of “real engineers”. 

So us poor ops guys, who never had a drafting table, who never went into the machine shop to hand blueprints to a tech, who never got to blow up anything on the test stand; we failed miserably on the standard of being a “real engineer”.  We merely operated the stuff that the real engineers built.

Along the same lines . . . .

I have been privileged to watch the advanced concept boys at work.  They are marvelous.  Through the study of all past and current rockets they have developed a number of “empirical models” — rules of thumb if you will — that can help in the initial ideas about building spacecraft.  If you want to lift a certain number of metric tonnes to low earth orbit, given a particular rocket type (solid, liquid, hypergolic, cryogenic, hydrogen, kerosene, etc.) the advanced concept boys can give you a variety of options based on known ratios of structure weight to propellant weight, burn out mass, etc.  And they can give you a rough guess at the cost.  And they can evaluate multiple options and compare them one to another in very short order. 

In the summer of 2002, I got to participate in an exercise, lasting about two months, on possible design options for manned missions to Mars.  The advanced concept boys generated a new heavy lift launch vehicle about every other day and could compare all the designs against each other on a number of figures of merit.  It’s heady work to invent new Saturn V class rockets in the computer lab.  Taller, shorter, with solid boosters or not, using kerosene or hydrogen or whatever.  One engine, two engines, five engines, twenty engines; two stages, three stages, four.  Whew.  At the end of two months the team had a great list of options and the pros and cons for every launcher.  And I found out that the advanced concept boys have been studying this problem for 40 years!  They have evaluated hundreds, thousands of various options.  Then they refined their studies, re-examined the basis for their methodologies, and started in again.
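The “empirical models” those teams use start from the rocket equation plus known ratios of structure weight to propellant weight.  A minimal sketch, with assumed numbers, shows the kind of answer such models give in seconds – for instance, why a single stage to orbit is so hard:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v, isp):
    """Tsiolkovsky rocket equation: required initial-to-burnout mass ratio."""
    return math.exp(delta_v / (isp * G0))

# Assumed figures for illustration: ~9.4 km/s total delta-v to low earth orbit,
# a kerosene-class specific impulse of 350 s, and a structural coefficient
# (structure weight / (structure + propellant weight)) of 0.08
delta_v = 9400.0  # m/s
isp = 350.0       # s
eps = 0.08

r = mass_ratio(delta_v, isp)
r_max_single_stage = 1.0 / eps  # best achievable ratio with zero payload

print(f"required mass ratio:       {r:.1f}")
print(f"single-stage limit (1/eps): {r_max_single_stage:.1f}")
# The required ratio exceeds the limit: with these numbers, even a
# zero-payload single stage cannot reach orbit -- hence staging.
```

Swap in different propellants, structural fractions, or stage counts and the same few lines rank whole families of launchers, which is exactly the game the advanced concept boys play.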

But you know what?  Advanced concept folks, even with all their knowledge of engineering principles, are not “real engineers”.

Real engineering starts after the viewgraphs stop.  Real engineers take the concept — boiled down as it may have been from hundreds of starting options — they take the concept and start making it real.  When you have to really design and build the rocket in its detailed glory; when you have to take the subsystems out to the test stand and start them up and see if they hang together; when you go from weight estimates to actual plans and find out what the gizmo really does weigh — that is real engineering. 

That is where you find out if the concept really will work or not; what the real problems are and how to solve them. 

When the rocket really flies you have proof positive of the real stuff of engineering — and whether you have it or not.

That is what real engineering is all about.