Dancing the Lunar Transit

By Sarah Frazier
NASA’s Goddard Space Flight Center

On March 6, 2019, our Solar Dynamics Observatory, or SDO, witnessed a lunar transit — during which both the Sun and the Moon displayed some odd behavior.

First, there was the transit itself. A lunar transit occurs when the Moon passes between SDO and the Sun, blocking the satellite’s view. But instead of appearing on one side of the frame and disappearing on the other, the Moon seemed to pause and double back partway through crossing the Sun. No, the Moon didn’t suddenly change directions in space: This is an optical illusion, a trick of perspective.

Illustration of the relative motion of the Moon and SDO during the lunar transit
NASA’s Solar Dynamics Observatory spotted a lunar transit just as it began the transition to the dusk phase of its orbit, leading to the Moon’s apparent pause and change of direction during the transit. This animation (with orbits to scale) illustrates the movement of the Moon, its shadow and SDO. Credits: NASA/SDO

Here’s how it happened: SDO is in orbit around Earth. When the transit started, the satellite was moving crosswise between the Sun and Earth, nearly perpendicular to the line between them, and faster than the Moon. But during the transit, SDO entered the dusk phase of its orbit — when it travels around toward the night side of Earth, moving almost directly away from the Sun and no longer making any progress perpendicular to the Sun line. The Moon, however, continued to move perpendicular to that line and thus could “overtake” SDO. From SDO’s perspective, the Moon appeared to reverse direction.
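The geometry can be sketched with a quick back-of-the-envelope Python calculation. The orbit radii and periods below are rough textbook values, and the difference in transverse (crosswise) speed is used as a stand-in for the Moon's apparent drift across the Sun, which works because the Sun is effectively at infinity compared to the Moon and SDO:

```python
import math

def transverse_speed(v_orbit_kms, phase_deg):
    """Crosswise (perpendicular-to-Sun-line) velocity component of a
    circular orbit; phase 0 deg = moving fully crosswise to the Sun."""
    return v_orbit_kms * math.cos(math.radians(phase_deg))

# Rough circular-orbit speeds from textbook radii and periods:
V_SDO = 2 * math.pi * 42164 / (24.0 * 3600)     # geosynchronous, ~3.1 km/s
V_MOON = 2 * math.pi * 384400 / (27.3 * 86400)  # ~1.0 km/s

# Early in the transit, SDO moves nearly crosswise (small phase) and its
# transverse speed beats the Moon's:
early = transverse_speed(V_MOON, 0) - transverse_speed(V_SDO, 10)

# In the dusk phase, SDO heads almost directly away from the Sun (phase
# near 90 deg), its transverse speed collapses, and the Moon overtakes it:
dusk = transverse_speed(V_MOON, 0) - transverse_speed(V_SDO, 85)

print(early, dusk)  # opposite signs: the apparent motion reverses
```

When the two numbers have opposite signs, the Moon's apparent drift across the solar disk has reversed direction.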

The second, subtler part of this celestial dance seemed to come from the Sun itself. If you look closely, you may notice the Sun seems to wiggle a bit, side-to-side and up and down, during the transit. That’s another result of SDO’s perspective, though in a different way.

SDO relies on solar limb sensors to keep its view steady and focused on the Sun. These limb sensors consist of four light sensors arranged in a square. To keep the Sun exactly centered in its telescopes, SDO is trained to move as needed to keep all four sensors measuring the same amount of light.

But when the Moon covers part of the Sun, the amount of light measured by some of the sensors drops. This makes SDO think it’s not pointed directly at the Sun, which would cause SDO to repoint — unless that function gets overridden.

Since SDO’s fine guidance system wouldn’t be much use during a lunar transit regardless, the mission team commands the spacecraft to disregard limb sensor data at the beginning of such transits. This loss of fine guidance accounts for some of the Sun’s apparent movement: SDO is now pointing at a general Sun-ward spot in space, instead of keeping its view steady using the much more accurate limb sensors.
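A toy model (not SDO's actual flight software) shows why that override matters. With four sensors arranged in a square, any imbalance between opposite pairs reads as a pointing error, so a lunar occultation masquerades as a pointing offset:

```python
def pointing_error(top, bottom, left, right):
    """Fine-guidance error signal from four limb sensors: imbalances
    between opposite sensor pairs read as a pointing offset."""
    return (right - left, top - bottom)

# Sun centered: all four sensors see the same light level -> no error.
print(pointing_error(1.0, 1.0, 1.0, 1.0))   # (0.0, 0.0)

# Moon covering part of the left limb: the left sensor darkens, and the
# control loop would wrongly command a slew unless it is told to ignore
# the limb sensors during the transit.
print(pointing_error(1.0, 1.0, 0.4, 1.0))
```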

The other factor behind the apparently wiggly Sun is temperature. SDO’s instruments are designed to work in the full glare of the Sun’s light and heat. When the Moon’s shadow passes over the spacecraft, the instruments quickly cool in the vacuum of space and start to bend and flex. The flexing of the front part of the telescope can make it look like the image is moving around in the frame.

SDO’s operators use strategically placed heaters on board the spacecraft to minimize this flexing and to get back to providing science-quality data — images that are focused, centered and steady — as quickly as possible.

You can see and download SDO’s data — science-quality and otherwise — at sdo.gsfc.nasa.gov/data.

The Story of Argo Sun

By Tom Bridgman, Ph.D.
NASA’s Goddard Space Flight Center
Scientific Visualization Studio

The Argo Sun Visualization. Credit: NASA/Tom Bridgman

In my nearly 20 years making visualizations at NASA’s Scientific Visualization Studio, “Argo Sun”— a simultaneous view of the Sun in various wavelengths of light — is probably one of my favorites.  It is not only scientifically useful, but it’s one of the few products I’ve generated that I also consider artistic.

And like so many things, it didn’t start out with that goal. Some visualization products are the result of meticulous planning. But many, like Argo Sun, are the result of trying to solve one problem and instead stumbling across a solution to a different problem. This is its story.

In mid-2012, NASA’s Heliophysics Division was preparing for the launch of a new solar observatory, the Interface Region Imaging Spectrograph, or IRIS.  The mission was designed to take high-resolution spectra of the Sun to study the solar chromosphere, the layer just above the Sun’s photosphere, or visible surface. Scientists hoped IRIS’s data would contribute to solving the coronal heating problem, a long-standing mystery of solar physics that asks why the temperature at the photosphere — 5,770 Kelvin, approximately 10,000 degrees Fahrenheit — rises to millions of Kelvin just a few thousand kilometers higher. Sandwiched inside those few thousand kilometers is the chromosphere, where IRIS would make its observations.

I was involved in producing visualizations for the IRIS mission pre-launch package, which would  demonstrate the scientific value that IRIS would add on top of existing data. I sought out the best data we had on the chromosphere, which came from NASA’s Solar Dynamics Observatory, or SDO. Launched in 2010, SDO takes continuous, full-disk images of the Sun, producing terabytes of data each day. It would be the best starting point for singling out the solar chromosphere.

But the solar chromosphere is very thin. At only about 3,000 kilometers thick, compared to 695,700 kilometers for the entire radius of the Sun, it is about half a percent of the Sun’s radius, or roughly 8 pixels in SDO imagery. How could I accurately isolate this thin region in SDO imagery, using only clever data manipulation?
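That pixel figure is easy to sanity-check. Assuming a full-disk plate scale of about 0.6 arcseconds per pixel and an apparent solar radius of about 960 arcseconds (both assumed round numbers typical for SDO imagery, not quoted from the article), the chromosphere's share of the disk works out to only a handful of pixels:

```python
R_SUN_KM = 695_700      # solar radius
CHROMO_KM = 3_000       # approximate chromosphere thickness
PLATE_SCALE = 0.6       # assumed arcsec per pixel (typical for SDO/AIA)
R_SUN_ARCSEC = 960      # assumed apparent solar radius in arcsec

r_sun_px = R_SUN_ARCSEC / PLATE_SCALE        # ~1600 pixels
fraction = CHROMO_KM / R_SUN_KM              # ~0.43% of the radius
thickness_px = fraction * r_sun_px           # a handful of pixels

print(f"{fraction:.2%} of the radius -> {thickness_px:.1f} px")
```

The result lands right around the rough 8-pixel figure quoted above.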

Two facts of physics helped me come up with a strategy. The first was knowledge that the chromosphere sits just on top of the photosphere, surrounding it like a thin wrapper covering a lollipop. The second was that the chromosphere emits light in the ultraviolet range while the photosphere emits light in the visible range. I reasoned that the Sun should look slightly bigger in ultraviolet light (lollipop plus wrapper) than in visible light (the lollipop alone). If I could lay the ultraviolet image on top of the visible light image, those extra few pixels around the edges in the ultraviolet image would be the chromosphere.

But it wasn’t quite that simple — just as visible light comes in a variety of different colors, so too does ultraviolet light span a range of different wavelengths. But SDO imagery easily demonstrated how radically different the Sun looked at different wavelengths. Which wavelength would most accurately identify the chromosphere? I really needed to test out a number of different ultraviolet wavelengths, laying them all on top of one another simultaneously to see what the differences were.

For this comparison to work, I needed two things from the SDO images:

  1. The precise center of the solar disk in the images. If I wanted to overlay the images on top of one another, their centers had better line up.
  2. A consistent scale and orientation. If one image was tilted or more zoomed in, that wouldn’t do either. They had to match scales so any features in each wavelength matched consistently.

But due to slight changes in the orientation of SDO and differences between its several telescopes, the solar images are not always perfectly centered or at precisely the same scale.  When generating movies from individual telescopes, this difference is usually small enough to ignore.  But this alignment was much more critical for a multi-image comparison.  I needed to be sure that any differences between images could reveal the chromosphere, not the quirks of a spacecraft.

It would take almost another year for a solution to those two issues to be found. The first turning point was the Venus transit in June of 2012, when the planet passed between the SDO spacecraft and the Sun. Watching Venus wander across the Sun’s disk in multiple telescopes, the researchers could see exactly where the planet appeared in each filter and thereby tune the image scale and orientation so they matched one another.  These revised parameters were incorporated into SolarSoft — a software package, under continuous development by the solar physics community for over twenty years, that is the industry standard for analyzing data from Sun-observing missions. Now I could re-project the images to a consistent scale and orientation, enabling easier comparison.

But the chromosphere was still just an 8-pixel sliver around the edge of the Sun. Inspiration from a colleague’s work would plant the seed of a solution. In February of 2013, another data visualizer in the SVS presented a draft of a visualization using multi-wavelength data from a new Landsat mission, later released here, where different wavelength filters passed over views of the ground.

Multi-wavelength view of Landsat 8 data. Credit: NASA/Alex Kekesi

Here was a way to compare multiple wavelengths without overlapping them – instead, they are presented side by side as the object of interest passes beneath. It immediately caught my attention as an interesting technique. By the time IRIS’s observations began to roll in, I at last had the germ of an idea for revealing the chromosphere with a multi-wavelength comparison.

To apply this approach to the Sun, the window would have to be circularly symmetric and rotate in a wheel-like fashion. I also needed a window that would work for comparing at least ten different images.  It quickly became clear that each wavelength should be presented as a pie-slice out of an SDO image. For this to work, precise matching across the different images of the center of the Sun, and its scale, was important; fortunately, with the update to our solar data software from the Venus Transit, I had both of those.  Then, using additional software, I was able to write a shader (a software component that maps what colors should be rendered onto an object in a 3-D graphics scene) that could select a pie-slice of a given angular size from the center of the input image and map it into the output image.  By staggering these pie-slices with different wavelengths around a given image, I could lay them side by side.  I also realized that I could control the positioning and width of these pie-slices for each frame of the visualization, allowing them to ‘march’ around the image of the Sun appearing to reveal the view in each wavelength.
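The wedge-selection logic can be imitated on the CPU with NumPy. This is a simplified stand-in for the actual GPU shader, with made-up constant images in place of SDO data:

```python
import numpy as np

def wedge_composite(images, offset_deg=0.0):
    """Assemble one frame from N same-sized images: each output pixel is
    taken from the image whose pie-slice (wedge) contains that pixel's
    angle about the frame center."""
    n = len(images)
    h, w = images[0].shape
    y, x = np.indices((h, w))
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Angle of each pixel about the center, mapped to 0..360 degrees.
    angle = np.degrees(np.arctan2(y - cy, x - cx)) + 180.0
    wedge = (((angle + offset_deg) % 360.0) / (360.0 / n)).astype(int) % n
    out = np.zeros_like(images[0])
    for i, img in enumerate(images):
        out[wedge == i] = img[wedge == i]
    return out

# Three constant "wavelength" images make the wedges easy to see:
imgs = [np.full((64, 64), v, dtype=float) for v in (1.0, 2.0, 3.0)]
frame = wedge_composite(imgs)
```

Advancing offset_deg a little on each frame is what makes the wedges 'march' around the disk.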

My first draft was a colorful wheel of solar imagery, which I titled SDO Peacock. A great beginning.

Generating visualizations from such large amounts of data takes a lot of computer time. Each of the 5,200 frames required loading ten different SDO image files (34 MB each) before even beginning to do the additional color work and controlling which part of each image was visible. The first time I attempted a full movie, it took an entire weekend to process. For a first run, it wasn’t perfect, but it was a taste of what was possible.  There were numerous data glitches in the resulting movie.  Some were due to the occasional bad frame render, others due to buggy intermediate data files left over from testing.

As the work continued, I began to feel a little strange about referring to it as a peacock — at the time, the SDO mascot was a rubber chicken called Camilla Corona, and, as someone who grew up with the classic color peacock logo of the NBC television network, I found the name a little awkward.

Camilla Corona, the NASA SDO mascot. Credit: NASA Solar Dynamics Observatory

After a little digging, I came across the story of Argus Panoptes, the many-eyed creature from Greek mythology whose eyes, according to the myth, live on in the peacock’s tail.  It somehow seemed appropriate.  I shortened the name to Argo Sun, and it stuck.

Drawing of an image from a 5th century BCE Athenian red figure vase depicting Hermes slaying the giant Argus Panoptes. Note the eyes covering Argus’ body. Credit: Wilhelm Heinrich Roscher (Public domain)

There were a number of small changes, edits and fixes over the next few weeks.  Just prior to the main release, a short trailer was produced with a music track and the final version was released December 17, 2013 – a year and a half after I’d first started thinking about it.

So just how well could you see the chromosphere with these SDO images? By narrowing the filter wedges to much smaller angles and positioning them carefully, it’s possible to generate a view zoomed in on the solar limb.  The results almost generate more questions than answers.  The fuzziness at the limb — along with irregularities created by solar features in the chromosphere and the way the limb brightens when seen in ultraviolet wavelengths — makes this boundary very difficult to identify.

How well can you distinguish the chromosphere with this technique? Not very well. Credit: Tom Bridgman

In the final analysis, I have to admit, the technique did not work well for showing the solar chromosphere on most displays. But the payoff was, nevertheless, a fascinating way to illustrate how radically different solar features appear in different wavelengths of light.  As each feature moves from one filter to the next, it appears and disappears depending on the wavelength of light: filaments off the limb of the Sun that are bright in the 30.4-nanometer filter appear dark in many other wavelengths, and sunspots, which are dark in optical wavelengths, are festooned with bright ribbons of plasma in ultraviolet wavelengths.  I’ve had several scientists tell me this is one of the best ways to illustrate why we observe the Sun in so many different wavelengths – and while that might not have been my original goal, it’s one of the reasons it turned out to be a fantastic success.

Artifacts and Other Imaging Anomalies Taken by NASA’s Solar Imagers

By Steele Hill
NASA’s Goddard Space Flight Center

NASA’s Sun-observing spacecraft produce some pretty breathtaking images of our star — everything from detailed closeups of its surface, to wide-field views of its expansive outer atmosphere.

Credit: NASA/SDO


But on occasion, the acrobatics of light can produce some odd photographic effects. Here are some of the more common imaging anomalies and explanations for why they occur.

1. Bending

Coronagraphs are designed to image the Sun’s corona, or outer atmosphere — but occasionally, other astronomical objects sneak into the picture. When they do, they can produce some strange image artifacts.

In some cases, the artifact is due to the instrument itself getting in the way. For example, note the “butterfly” shape of Venus in the STEREO coronagraph (COR2) image below at the 10 o’clock position. That’s caused by diffraction, or bending, of Venus’s light off of the occulter stem — the  strip of material, too out-of-focus to be seen in this image, that holds the dark disc in the center to block the bright Sun.


2. Bleeding

In other cases, the astronomical objects are just too bright, saturating the instrument’s sensitive detectors and leaving vertical or horizontal streaks of light across the image.

For example, consider this video from the SOHO spacecraft, compiled from data taken Jan. 2-4, 2010. As a Sun-grazing comet streams across the sky, Venus is visible just to the lower right of the Sun. Notice how the planet’s light smears out to both sides — that’s the “bleeding” of the excess signal along the detector’s columns.  Often the heads of bright comets will show the same aberration. (The attentive observer will notice Mars, a small dot in the upper left, moving left to right.)


3. Blooming

In a different scenario, NASA’s Solar Dynamics Observatory captured this X7 (major) solar flare erupting on Aug. 9, 2011, shown here in extreme ultraviolet light. The flare saturated the detector, producing bright “blooming” artifacts above and below the flare region and sending extended diffraction patterns spreading out in an “X” formation across the SDO imager.

Credit: NASA/SDO

4. Banding

As a final example, we look at highly energetic particles that travel through space. Some of these, known as solar energetic particles, originate from the Sun, while others, known as galactic cosmic rays, come from outside the solar system. When they pass through the detectors, they can produce thin bright bands or streaks of light.  This one was observed by a STEREO coronagraph.

Credit: NASA/SDO

Although they may seem pesky, these artifacts and anomalies are normal, expected results from properly functioning spacecraft. But they remind us that images, like any other form of data, don’t speak for themselves: what we see is a product both of nature and the instruments we use to observe it.

Solar X-Rays: How a CubeSat Sheds New Light on the Sun’s X-Ray Emissions

By Susannah Darling
NASA Headquarters

On Dec. 3, 2018, the second Miniature X-Ray Solar Spectrometer, MinXSS-2, was launched. MinXSS-2 is a NASA CubeSat designed to study the soft X-ray photons that burst from the Sun during solar flares. Along the way, it may answer a long-standing mystery of what heats up the Sun’s atmosphere, the corona. Let’s explore the data from the CubeSat’s predecessor, MinXSS-1, and the science technique known as X-ray spectroscopy that it uses.

Think of a prism. As white light passes through a prism, it’s split into its different wavelengths and you can see the rainbow. Visible light spectroscopy is often done in high school physics classes where light emissions from certain chemicals are divided and analyzed with a diffraction grating.

When the light comes from a specific chemical, however, we don’t see the full rainbow – instead, we see tiny slivers of light from the rainbow, known as spectral lines. Hydrogen, for example, leaves four lines: one purple, one darker blue, one lighter blue and one red, making it very easy to identify.

Spectral lines corresponding to Hydrogen. Credit: Merikanto, Andrignola, CC-BY-0, via WikiMedia Commons

Every chemical leaves its own ‘fingerprint’ in the form of spectral lines. Spectroscopy uses them to work backwards and figure out the chemical composition of the material that produced the light.
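Hydrogen's four visible lines, for example, fall out of the Rydberg formula for the Balmer series, a standard physics result (vacuum wavelengths), included here just to show how a spectral 'fingerprint' can be computed:

```python
# 1/lambda = R * (1/2^2 - 1/n^2) for the Balmer series, n = 3, 4, 5, ...
R = 1.0973731568e7  # Rydberg constant, per meter

def balmer_nm(n):
    """Vacuum wavelength of the Balmer line for upper level n, in nm."""
    inv_wavelength = R * (1.0 / 2**2 - 1.0 / n**2)
    return 1e9 / inv_wavelength

for n in (3, 4, 5, 6):
    print(n, round(balmer_nm(n), 1))
# n=3 is the red line (~656 nm); n=4 and 5 the blue-green and blue
# lines; n=6 the violet line (~410 nm).
```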

X-ray spectroscopy works much like visible light spectroscopy, except the lines aren’t in the visible range. Instead of a prism, researchers use a small silicon chip. As photons pass through the chip, each leaves behind a charge proportional to its energy; each charge is then sorted into a bin based on its size, which identifies the photon’s wavelength. In the prism analogy, the charges are the specific shades and the bins are the broad colors: pale blue goes in the blue bin, jade goes in the green bin. With enough photon charges sorted into bins, you have an X-ray spectrum that allows you to determine the chemical composition of solar flares.
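A toy version of that binning step might look like the following; the photon energies and bin width are illustrative inventions, not MinXSS's actual design (though the cluster near 6.7 keV nods to a well-known hot-iron line):

```python
from collections import Counter

BIN_WIDTH_KEV = 0.5  # illustrative bin width, not the MinXSS design

def bin_photons(charges_kev):
    """Sort photon charges (proportional to energy) into energy bins."""
    return Counter(int(c / BIN_WIDTH_KEV) for c in charges_kev)

# Made-up photon energies in keV; the cluster near 6.7 keV mimics a
# hot-iron (Fe XXV) spectral line.
photons = [1.1, 1.3, 1.2, 2.6, 2.7, 6.7, 6.6, 6.7]
spectrum = bin_photons(photons)
print(spectrum)  # three occupied bins, with counts of 3, 2 and 3
```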

Just as in visible light spectroscopy, in X-ray spectroscopy each chemical composition leaves a fingerprint of evidence: Different chemicals lead to different charge intensities. MinXSS uses these to determine the abundance of different chemicals present on the Sun.

But the Sun isn’t just a homogeneous mix of chemicals — rather, different layers of the Sun contain different chemicals, and scientists have a pretty good understanding of which chemicals are where. So, when MinXSS observes a burst of X-rays from a solar flare, researchers can look at the abundance, and the specific compositions, of the chemicals observed, and identify which layer of the Sun those X-rays seem to come from. This way, scientists can determine the source of the flare – and, in turn, help determine which layer of the Sun is causing those flares to heat the corona, the Sun’s outer atmosphere, to multi-million degree temperatures.
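In spirit, the layer identification is a nearest-reference comparison. The sketch below is hypothetical: the reference abundance factors are placeholders, not MinXSS calibration values:

```python
# Placeholder reference abundance factors for each candidate source
# layer; these values are hypothetical, not MinXSS calibration.
REFERENCE = {"corona": 2.0, "photosphere": 1.0}

def likely_source(abundance_factor):
    """Pick the layer whose reference abundance factor is closest to
    the measured value."""
    return min(REFERENCE,
               key=lambda layer: abs(REFERENCE[layer] - abundance_factor))

print(likely_source(1.9))  # corona
print(likely_source(1.1))  # photosphere
```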

Take a look at the following graph, showing data from MinXSS-1. The graph shows the abundance factor — a ratio of chemical elements that helps scientists identify different layers of the Sun — and how it changes over time. The vertical axis of this graph is the abundance factor, and the horizontal axis is time. Watch the green dots as time goes along the graph, from left to right:

Credit: NASA/MinXSS/Tom Woods

Starting on the left side of the graph, the green dots all match typical coronal measurements — indicating the X-rays came from the corona. At approximately 2 a.m. on July 23, 2016, an M5.0 solar flare occurred. During the solar flare, the composition of the chemicals suddenly looks more like those that typically come from the photosphere — the visible surface of the Sun — rather than the corona above. This indicates that the source of the solar flare — and the heat it produced — came up from the photosphere.

The following graph of the same event, also from MinXSS-1, looks at the irradiance of the X-rays, or the density of the photons over an area during a period of time. Here, we see a 200-fold increase in the irradiance that occurred during the flare.

Credit: NASA/MinXSS/Tom Woods

This graph has a lot going on, so let’s break it down. The vertical axis is the aforementioned irradiance, or the density of the photons over an area during a given time period. The bottom horizontal axis is the energy observed, and the top horizontal axis shows the wavelength that corresponds to those energies. The green line is the observations of irradiance before the M5.0 flare, and the black line is during the flare itself. Along the black line, the chemicals that correspond to the energies/wavelengths are also labeled.

As this graph shows, once the flare hit, all of the measurements shift upwards from the green line to the black line: The overall irradiance of the X-rays increased by a factor of 200.  You can also see there are significant spikes at wavelengths/energies corresponding to iron (Fe XXV), silicon (Si) and calcium (Ca), indicating that these chemicals played a large role in the solar flare, and the coronal heating it produced.

Now MinXSS-2, the next generation of MinXSS spacecraft, has begun to take science data, with updated instruments that will provide even more detailed data on solar soft X-rays. You can follow along with MinXSS-2’s journey on Twitter, at the MinXSS website or, for even more science data dives, by keeping an eye on The Sun Spot.

Eavesdropping in Space: How NASA records eerie sounds around Earth

By Mara Johnson-Groh
NASA’s Goddard Space Flight Center

Space isn’t silent. It’s abuzz with charged particles that — with the right tools — we can hear. Which is exactly what NASA scientists with the Van Allen Probes mission are doing. The sounds recorded by the mission are helping scientists better understand the dynamic space environment we live in so we can protect satellites and astronauts.

This is what space sounds like.

To some, it sounds like howling wolves or chirping birds or alien space lasers. But these waves aren’t created by any such creature – instead they are made by electric and magnetic fields.

If you hopped aboard a spacecraft and stuck your head out the window, you wouldn’t be able to hear these sounds like you do sounds on Earth. That’s because unlike sound — which is created by pressure waves — this space music is created by electromagnetic waves known as plasma waves.

Plasma waves lace the local space environment around Earth, where they toss magnetic fields to and fro. The rhythmic cacophony generated by these waves may be inaudible to our ears, but NASA’s Van Allen Probes were designed specifically to listen for them.

The Waves instrument, part of the Electric and Magnetic Field Instrument Suite and Integrated Science — EMFISIS — on the Van Allen Probes, is sensitive to both electric and magnetic waves. It probes them with a trio of electric sensors as well as three search coil magnetometers, which look for changes in the magnetic field. All the sensors were specifically designed to be highly sensitive while using the least amount of power possible.

As it happens, some electromagnetic waves occur within our audible frequency range. This means the scientists only need to translate the fluctuating electromagnetic waves into sound waves for them to be heard. Effectively, EMFISIS allows scientists to eavesdrop on space.

When the Van Allen Probes travel through a plasma wave with fluctuating magnetic and electric fields, EMFISIS studiously records the variations. When the scientists compile the data they find something that looks like this:

Whistler Waves Recorded by NASA’s Van Allen Probes. Credit: University of Iowa

This video helps the scientists visualize the sounds coming from space. The warmer colors show us more intense plasma waves as they wash over the spacecraft. For these particular waves, generated by lightning, the higher frequencies travel faster through space than the lower frequencies. We hear this as whistling tones decreasing in frequency. These particular waves are examples of whistler waves, created when the electromagnetic impulse from a lightning strike travels upward into Earth’s outer atmosphere, following magnetic field lines.
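You can mimic that descending whistler tone with a short Python script. This is a purely illustrative synthesis (the sweep range and duration are made up), not EMFISIS data processing:

```python
import math
import struct
import wave

RATE = 8000                  # samples per second
DUR = 1.5                    # seconds
F_HI, F_LO = 3000.0, 300.0   # sweep from high to low frequency

# Higher frequencies arrive first, so the pitch sweeps downward.
samples = []
phase = 0.0
n = int(RATE * DUR)
for i in range(n):
    t = i / RATE
    f = F_HI * (F_LO / F_HI) ** (t / DUR)  # exponential downward sweep
    phase += 2 * math.pi * f / RATE
    samples.append(int(32000 * math.sin(phase)))

# Write a mono 16-bit WAV file you can actually listen to.
with wave.open("whistler.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % n, *samples))
```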

Below 0.5 kHz (the very bottom of the graph in the video), the sound is filled with what are known as proton whistlers. These waves are generated when lightning-triggered whistlers interact with the motion of protons, rather than electrons. Recently, NASA’s Juno mission recorded high-frequency whistlers around Jupiter — the first time they’ve been heard around another planet.

In addition to lightning whistlers, a whole menagerie of phenomena has been recorded. In this video we hear a whooping noise made by another type of plasma wave — chorus waves.

Chorus Waves Recorded by NASA’s Van Allen Probes. Credit: University of Iowa

Plasma wave tones are dependent on the way waves interact with electrons and how they travel though space. Some types of waves, including these chorus waves, can accelerate electrons in near-Earth space, making them more energetic. Here is another typical example of chorus waves.

Chorus Waves Recorded by NASA’s Van Allen Probes. Credit: University of Iowa

NASA scientists are recording these waves not for musical interests, but because they help us better understand the dynamic space environment we inhabit. These plasma waves knock about high-energy electrons speeding around Earth. Some of those freed electrons spiral earthward, where they interact with our upper atmosphere, causing auroras, while others can pose a danger to spacecraft or telecommunications satellites, which can be damaged by their powerful radiation.

Excitement Increases as Voyager 2 Sees a Decrease in Heliospheric Particles

By Susannah Darling
NASA Headquarters

A few weeks ago, the Voyager 2 spacecraft beamed back the first hints that it might soon be leaving the heliosphere — the giant bubble around the Sun filled with its constant outpouring of particles, the solar wind. In the past few days, we have received even more clues suggesting that that time is drawing near.

Back in October, we saw a spike in the counting rate of particles detected by the High Energy Telescope of Voyager 2’s Cosmic Ray Subsystem, or CRS. The CRS High Energy Telescope detects high energy particles that come from outside our heliosphere. A rapid increase in the number of particles counted over time — that is, their counting rate — gave us the first hint that we were getting close to our heliosphere’s boundary, where these interstellar cosmic rays sneak in.

The new data that scientists are talking about comes from the Low Energy Telescope, another CRS telescope on both Voyager 1 and 2. It shows the counting rate of lower energy particles that typically originate within the heliosphere. The counting rate of these particles declines as they approach the heliopause and ultimately drops to near zero at that boundary, where the particles can escape into interstellar space.

In the following graph of the Low Energy Telescope data, right around the beginning of November, you’ll notice a pretty dramatic change: All of a sudden, Voyager 2’s counting rate of low-energy particles dropped — though not yet to near zero, as it did when Voyager 1 entered interstellar space. Scientists will keep their eye on these graphs as one of several indicators to determine when Voyager 2 truly passes outside of the heliosphere.  Once there, Voyager will be poised to share all-new data about the nature of the space between the stars.

Credit: NASA/JPL/Ed Stone

The vertical axis is the count rate for the heliospheric particles, or how many low energy particles are being detected by the Low Energy Telescope of the CRS every second. The horizontal axis is time, starting in August 2018 and going to Nov. 12, 2018. However, note that the vertical axis is zoomed in and stops at 17; while this is a big step in the right direction, the counting rate isn’t yet near zero, which is what we would expect if Voyager 2 were out of the heliosphere.

While there was a drop in the heliospheric particles, at the same time the higher energy telescope observed increased counting rates. This graph displays both the higher energy counting rate data (top graph) together with the lower energy data (bottom graph):

Credit: NASA/JPL/Ed Stone

Voyager 1 data from 2012-2013 is shown in the red lines, with time shifted by 6.32 years. The Voyager 2 data from this year is shown in blue. As you can see, the counting rate from the High Energy Telescope of the CRS on Voyager 2 has been steadily increasing since October 2018, but the past few data points have shot up faster than expected. This loss of heliospheric particles and gain in interstellar particles is exactly what we expect when a spacecraft leaves the heliosphere, and it has scientists excited that Voyager 2 is close to crossing the heliopause.
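The kind of joint indicator scientists watch for can be sketched in a few lines; note that the counting rates and thresholds below are made up for illustration, not actual CRS data:

```python
def boundary_candidate(low_rates, high_rates, low_drop=0.6, high_rise=1.2):
    """Flag sample indices where the low-energy (heliospheric) rate has
    fallen below low_drop of its starting level while the high-energy
    (interstellar) rate has risen above high_rise times its start."""
    low0, high0 = low_rates[0], high_rates[0]
    return [i for i in range(len(low_rates))
            if low_rates[i] < low_drop * low0
            and high_rates[i] > high_rise * high0]

# Made-up daily counting rates echoing the shape of the graphs:
low = [17, 17, 16, 16, 15, 9, 8]
high = [2.0, 2.1, 2.2, 2.4, 2.6, 3.0, 3.2]
print(boundary_candidate(low, high))  # the last two samples are flagged
```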

We’ll wait in anticipation to see the path Voyager 2 is taking, closely monitoring the data it sends back. Keep following the Sun Spot to get updates on the data we receive from Voyager 2, and check out JPL’s Voyager and GSFC’s Voyager websites to learn more about the Voyager missions.

How to Be an Orbital Mechanic: Reading Orbit Plots with Parker Solar Probe

By Dr. Tom Bridgman
NASA’s Goddard Space Flight Center

On Oct. 29, 2018, at about 1:04 p.m. EDT, Parker Solar Probe became the closest spacecraft to the Sun, breaking the record of 26.55 million miles from the Sun’s surface set by Helios 2 in April 1976. But this is just the beginning. Parker Solar Probe — NASA’s mission to touch the Sun — will get closer still.

This process is the result of carefully planned orbital mechanics, which will result in 24 passes around the Sun. Parker starts off in an orbit around the Sun essentially the same as Earth’s – that’s where it launched from, after all – and gradually moves to a position inside the orbit of Mercury.  To do this, the spacecraft must slow down significantly (see Figure 1).

Figure 1: Parker Solar Probe orbit in the plane of the solar system. Parker orbit data from JHU Applied Physics Lab. Solar system orbit data from JPL/NAIF.

One of the fundamental principles of orbital dynamics is that if you want to change the periapsis, or point of closest approach, of an elliptical orbit, you get the most bang for your buck if you change speed at the apoapsis, or the point when you’re furthest away.
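You can check this principle with the vis-viva equation. The sketch below applies a braking burn at apoapsis of a Parker-like solar orbit; the orbit sizes and the 1 km/s velocity change are illustrative, not the mission's actual design values:

```python
import math

MU_SUN = 1.32712440018e11  # Sun's gravitational parameter GM, km^3/s^2
AU = 1.496e8               # kilometers

def vis_viva(r_km, a_km):
    """Orbital speed at distance r on an orbit with semi-major axis a."""
    return math.sqrt(MU_SUN * (2.0 / r_km - 1.0 / a_km))

def new_periapsis_after_burn(r_apo_km, r_peri_km, dv_kms):
    """Change speed by dv at apoapsis and return the new periapsis."""
    a = (r_apo_km + r_peri_km) / 2.0
    v = vis_viva(r_apo_km, a) + dv_kms               # speed after the burn
    a_new = 1.0 / (2.0 / r_apo_km - v * v / MU_SUN)  # invert vis-viva
    return 2.0 * a_new - r_apo_km                    # apoapsis unchanged

# Parker-like ellipse (aphelion ~1 AU, perihelion ~0.25 AU), then slow
# down by 1 km/s at aphelion -- illustrative numbers only:
r_peri_new = new_periapsis_after_burn(AU, 0.25 * AU, -1.0)
print(r_peri_new / AU)  # noticeably smaller than 0.25
```

Slowing at aphelion leaves the aphelion essentially fixed but pulls the perihelion inward, which is why Parker's orbit-shrinking maneuvers happen near aphelion.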

You can see this principle applied in the case of Parker Solar Probe. Figure 2 below plots Parker’s orbital velocity on the y-axis (how fast it’s moving relative to the Sun, in kilometers per second, km/s), with time plotted along the x-axis. Parker is represented by the purple curve; Mercury (black curve) and Earth (blue curve) are included for reference. [Click on the graph to see a full-size version.]

Figure 2: Parker Solar Probe orbit speed plotted with inner solar system planets for comparison. Parker orbit data from JHU Applied Physics Lab. Solar system orbit data from JPL/NAIF.

The first thing you’ll notice is that the purple line is moving up and down quite a bit, indicating changes in its orbital velocity: Parker doesn’t travel at a constant speed throughout its orbit, but rather speeds up and slows down at different points.

The little dots that appear at the spikes and the dips on the curve mark the times when Parker is either furthest from or closest to the Sun on each orbit. The aphelion positions, when Parker is farthest away from the Sun, are marked with red dots: Note that they coincide with the dips in the curve, when Parker has its slowest speed. The perihelion, or close approaches, are marked with green dots, and coincide with the spikes in the graph, where Parker is traveling fastest.

Over time, you can see that the spikes get taller: Parker’s speed at perihelion gets faster and faster. Although the graph doesn’t directly show this, these increases in speed correspond to Parker’s perihelion moving closer and closer to the Sun: The closer it gets, the more of the Sun’s gravitational energy is converted into the spacecraft’s energy of motion, increasing its speed. Parker launched from Earth orbit with a speed of about 17 kilometers per second (38,000 miles per hour), slower than the orbital speed of Earth (about 29 kilometers per second, or 65,000 miles per hour), enabling it to ‘fall’ towards the Sun. Accelerating in the Sun’s gravity, it reached a speed of over 95 kilometers per second (212,000 miles per hour) at the first closest approach. But looking at the graph, we see that Parker will go faster (and closer) still, with speeds on its final orbits exceeding 190 kilometers per second (425,000 miles per hour).
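As a sanity check, the vis-viva equation can roughly reproduce the quoted final perihelion speed. The apsides used below are assumptions drawn from general mission descriptions (a final perihelion of roughly 6.9 million kilometers and an aphelion near 0.73 AU), not values read off the plot:

```python
import math

GM_SUN = 1.32712e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11        # astronomical unit, m

def perihelion_speed(r_p, r_a):
    """Vis-viva speed (m/s) at perihelion for an orbit with apsides r_p, r_a (m)."""
    a = 0.5 * (r_p + r_a)
    return math.sqrt(GM_SUN * (2.0 / r_p - 1.0 / a))

# Assumed final-orbit apsides (illustrative, not official mission values):
v = perihelion_speed(6.9e9, 0.73 * AU)
print(f"final-orbit perihelion speed ~ {v / 1000:.0f} km/s")
```

The result lands near the 190 km/s figure quoted above, showing how tightly perihelion distance and perihelion speed are linked.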

But how does Parker keep getting closer?  Getting closer to the Sun doesn’t come for free — each shift in the orbit requires the help of gravitational assists from Venus.  Note on the graph above that every time the spacecraft transitions to a higher speed at perihelion, or spike in the curve, there is a prior speed decrease near aphelion, or the dip in the curve, marked on the plot by a thicker red line. For Parker, these speed changes are accomplished with fly-bys of the planet Venus near Parker’s aphelion position. Unlike many gravity assists where spacecraft gain energy from sling-shotting around a planet, Parker is losing energy to Venus in order to slow down. By slowing down at aphelion, the orbit’s overall size decreases, which in turn increases the spacecraft’s speed near the Sun.
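The mechanics of such an energy-losing flyby can be sketched with a frame-change argument: relative to Venus, the encounter only rotates the spacecraft's velocity (its speed relative to Venus is conserved), but in the Sun's frame that rotation can subtract orbital energy. The numbers and turn angle below are purely illustrative, not Parker's actual flyby geometry:

```python
import math

def flyby(v_sc, v_planet, turn_deg):
    """Rotate the spacecraft's planet-relative velocity by turn_deg (the effect
    of the flyby in the planet's frame) and return its new Sun-frame velocity.
    Velocities are 2D vectors (vx, vy) in km/s."""
    rel = (v_sc[0] - v_planet[0], v_sc[1] - v_planet[1])
    t = math.radians(turn_deg)
    rot = (rel[0] * math.cos(t) - rel[1] * math.sin(t),
           rel[0] * math.sin(t) + rel[1] * math.cos(t))
    return (rot[0] + v_planet[0], rot[1] + v_planet[1])

def speed(v):
    return math.hypot(v[0], v[1])

# Illustrative numbers: Venus orbits at roughly 35 km/s; the spacecraft's
# pre-flyby Sun-frame velocity and the 40-degree turn are made up.
v_venus = (35.0, 0.0)
v_before = (30.0, 5.0)
v_after = flyby(v_before, v_venus, turn_deg=40.0)

print(f"Sun-frame speed: {speed(v_before):.1f} -> {speed(v_after):.1f} km/s")
rel_before = speed((v_before[0] - v_venus[0], v_before[1] - v_venus[1]))
rel_after = speed((v_after[0] - v_venus[0], v_after[1] - v_venus[1]))
print(f"speed relative to Venus: {rel_before:.2f} -> {rel_after:.2f} km/s (conserved)")
```

With this geometry the Sun-frame speed drops even though nothing about the flyby changes the speed relative to Venus; turning the other way would instead add energy, which is the familiar slingshot case.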

Parker doesn’t fly by Venus on every single orbit – it will only pass the planet seven times over the course of seven years – but you can spot the flybys in the graph as a small jag in certain spots. If Parker is accelerating towards the Sun — i.e., on the upward slopes in the graph, after the dip in a curve — the flyby appears as a little jag in the orbit, like the ones just after October 2019 and October 2021. However, some flybys occur while the spacecraft is outbound from the Sun and decelerating, like the one near July 2020, which is a little less obvious in the plot. Each jag represents Parker moving just a bit slower and just a bit closer to the Sun – on each orbit gathering unprecedented, in situ observations of the star we live with.

Voyager 2 May Soon Be Joining Its Twin in Interstellar Space

By Susannah Darling
NASA Headquarters

In 2012, Voyager 1 — one of a pair of deep-space probes launched in 1977 — crossed into a part of space no other spacecraft had ever seen: the interstellar medium. At over 11 billion miles from the Sun, several crucial changes appeared in the data Voyager 1 was sending back to Earth – key observations showing that Voyager 1 was entering interstellar space.

All the planets in the solar system are surrounded by a constant outpouring of material from the Sun, called the solar wind, which creates a giant bubble called the heliosphere.  Eventually, this solar wind peters out, held back by the wind coming from other stars – and this is the boundary that Voyager 1 crossed. (The gravitational effect of the Sun extends much further, so the solar system itself continues out trillions of miles, with additional asteroids and cometary bodies orbiting our star.)

Now, recent data from the Voyager 2 spacecraft gives us the first indication that it, too, is about to cross the heliopause — the final boundary of the heliosphere — into interstellar space. Scientists are looking at what happened in Voyager 1’s observations to estimate where Voyager 2 is in its own journey. As Voyager 1 neared the boundary, it began to see more particles that originated outside the heliosphere and fewer that originated from inside it. It also measured a change in the magnitude of the magnetic field, which increased beyond what was typical within the rest of the heliosphere.

Naturally, scientists are watching for similar clues in Voyager 2 data. In September 2018, the spacecraft’s Cosmic Ray Subsystem, or CRS, began to measure an increase in the galactic cosmic ray particles hitting, interacting with, or passing through Voyager 2. Heliospheric particles can be differentiated from galactic cosmic rays by their energy levels: while heliospheric particles have energies of around 0.5 MeV, galactic cosmic rays have much higher energies — over 70 MeV. The increase that the Voyager 2 CRS has seen in 2018 is similar to the jump that Voyager 1 first saw back in May of 2012.
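Using just the energy threshold quoted above, the separation of the two populations can be sketched like this (a toy illustration with made-up energies; the real CRS processing is far more involved):

```python
# Separate particle detections into heliospheric-range vs. galactic populations
# by energy alone, using the 70 MeV galactic cosmic ray threshold quoted above.
GALACTIC_MIN_MEV = 70.0

def classify(energies_mev):
    """Split a list of particle energies (MeV) into
    (heliospheric-range count, galactic cosmic ray count)."""
    galactic = sum(1 for e in energies_mev if e > GALACTIC_MIN_MEV)
    heliospheric = len(energies_mev) - galactic
    return heliospheric, galactic

# Hypothetical sample of detected energies, in MeV:
sample = [0.4, 0.6, 1.2, 85.0, 120.0, 0.5, 300.0]
h, g = classify(sample)
print(f"{h} heliospheric-range particles, {g} galactic cosmic rays")
```

A rising galactic count and a falling heliospheric count over time is exactly the trend described in the paragraphs above.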

The following graph shows the data from Voyager 1 around the time it crossed the heliopause. The vertical scale is the count rate of the galactic cosmic ray particles — how many particles per second are interacting with the CRS, averaged for each day. The horizontal scale is time, from Jan. 1, 2018 to March 14, 2019. (Note: the time has been shifted forward 6.32 years to line up with the Voyager 2 data in the next graph.) Reading from left to right, the Voyager 1 CRS observations show more and more of the high-energy particles from the interstellar medium – indicating that the farthest reaches of the solar wind are increasingly unable to halt the progress of the incoming galactic cosmic rays.
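The two transformations behind this plot, daily averaging of the per-second count rate and the 6.32-year time shift, can be sketched as follows (sample timestamps and rates are made up, not real CRS data):

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Shift Voyager 1 timestamps forward 6.32 years so they overlay the
# Voyager 2 data, then average the count rate into one point per day.
SHIFT = timedelta(days=6.32 * 365.25)

def daily_average(samples):
    """samples: list of (datetime, counts_per_second).
    Returns {shifted date: mean count rate for that day}."""
    buckets = defaultdict(list)
    for t, rate in samples:
        buckets[(t + SHIFT).date()].append(rate)
    return {day: sum(rates) / len(rates) for day, rates in buckets.items()}

# Hypothetical Voyager 1 readings from mid-2012:
samples = [
    (datetime(2012, 5, 7, 0, 0), 2.10),
    (datetime(2012, 5, 7, 12, 0), 2.30),
    (datetime(2012, 5, 8, 6, 0), 2.45),
]
for day, rate in sorted(daily_average(samples).items()):
    print(day, f"{rate:.2f} counts/s")
```

The shifted dates land in 2018, which is why 2012-2013 Voyager 1 data appears on a 2018-2019 time axis.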

Voyager 1 data, with time shifted forward 6.32 years. Credit: NASA/Ed Stone

This next graph adds the observations from Voyager 2’s CRS from September 2018, overlaid on top of the Voyager 1 data from 2012:

Voyager 2 data overlaid on Voyager 1 data. Credit: NASA/Ed Stone

Note how similar the two data sets are. This is our first hint that Voyager 2 is crossing through a region similar to the one Voyager 1 passed through – it’s nearing the heliopause. Perhaps the data will continue to match up perfectly – and we can expect Voyager 2 to cross out of the heliosphere by January 2019, three months after first spotting elevated galactic cosmic rays, just as Voyager 1 did.

But, perhaps not.

There are several differences between Voyager 1 and Voyager 2 that make the exact date of the crossing difficult to predict.

For one, Voyager 2 is travelling at a different angle than Voyager 1: Voyager 1 is heading north of the solar equator, while Voyager 2 is heading south. The two paths diverged at Saturn, where Voyager 1 took a detour for a closer look at Titan while Voyager 2 continued on its original trajectory. As the exact shape of the heliosphere at the boundary is not well known, these different trajectories make it difficult to determine where exactly Voyager 2 will cross into the interstellar medium.

Credit: NASA

Voyager 1 also travels at 38,027 mph, while Voyager 2 goes at a reduced pace of 34,391 mph. That 3,636 mph, a 10% difference, means that Voyager 2 may take longer to cross the heliopause than Voyager 1.

Finally, there’s also the difference in the solar cycle – when Voyager 1 crossed the heliopause, the solar cycle was approaching a maximum, the phase of its cycle when it’s most active and expelling the most material. But as Voyager 2 nears the heliopause, the Sun is approaching solar minimum. This means the heliosphere itself may change shape and size, which also makes it more difficult to predict exactly where or when the heliopause will be crossed. It could take longer to cross over the heliopause, or it could happen much faster due to all these factors.

Once Voyager 2 crosses from the heliosheath into the interstellar medium, the galactic cosmic rays will level out, or plateau, much like they did with Voyager 1. Then the heliospheric particles will lessen and die down, and the magnetic field might increase in magnitude. These are the trends Voyager 1 saw, and we can expect to see them again.

Scientists will also be watching for a whole new set of observations: Voyager 2 still has an instrument powered on that was not working on Voyager 1 when it crossed the heliopause. Voyager 2’s Plasma Science instrument can measure the density, temperature and speed of the solar wind plasma, which may give more information about the differences between the heliosphere and the interstellar medium.

So, when will we get there? We’re not exactly sure — but that’s the exciting part.

Learn more about the Voyager missions and keep an eye on The Sun Spot for more information as we follow Voyager 2 into the unknown.

Here’s a Coronal Mass Ejection Right Before It Hit Earth

By Miles Hatfield
NASA’s Goddard Space Flight Center

On Aug. 20, 2018, a Coronal Mass Ejection — an explosion of hot, electrically charged plasma erupting from the Sun — made its way towards Earth. By Aug. 26 it had hit — and aurora were visible as far south as Montana and Wisconsin in the United States.

NOAA’s DSCOVR satellite (short for Deep Space Climate Observatory) watched it all go down. DSCOVR’s measurements track magnetic field strength and direction – two aspects of a CME that determine how much it will affect Earth. These data, and the unique view of a CME that they provide, are why DSCOVR is such a useful tool for NASA’s space weather forecasters, who can detect CMEs between 15 minutes and an hour before they strike Earth. Here’s a plot showing what DSCOVR saw before the CME hit it, while it was passing over, and after it passed.

The two lines show:

  • The total magnetic field strength – a combined measure of the magnetic field strength in the north-south, east-west, and towards-Sun vs. away-from-Sun directions; and
  • The north-south magnetic field strength on its own.
    (The units are nanoteslas – named after Nikola Tesla, the famous physicist, engineer and inventor.)

(You might be wondering: Why single out the north-south magnetic field strength (red) if it’s included in the total magnetic field strength (blue)? Because Earth’s magnetic field also runs along the north-south direction — and this leads to a very special interaction that can make CMEs especially dangerous. More on that below.)
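The relationship between the two plotted lines can be sketched directly: the total field strength is the root-sum-square of the three components, and the condition to watch for is simply a negative north-south component. The component names below are illustrative, not the field names in DSCOVR's actual data products:

```python
import math

def total_field(b_ns, b_ew, b_sun):
    """Total magnetic field strength (nT) from the three components:
    north-south, east-west, and towards-Sun vs. away-from-Sun."""
    return math.sqrt(b_ns ** 2 + b_ew ** 2 + b_sun ** 2)

def is_southward(b_ns):
    """A negative north-south component points south, opposing Earth's
    field -- the geometry that lets a CME disturb the magnetosphere."""
    return b_ns < 0

# A hypothetical single reading, in nanoteslas:
b_ns, b_ew, b_sun = -12.0, 5.0, 3.0
print(f"|B| = {total_field(b_ns, b_ew, b_sun):.1f} nT, "
      f"southward: {is_southward(b_ns)}")
```

This is why the red line can dip below zero while the blue line, which squares each component, never does.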

Let’s walk through what happened.

  1. Before the CME hits: DSCOVR is surrounded by the solar wind. Compared to a CME, the solar wind’s magnetic fields tend to be a little more chaotic — note the squiggly lines to the center-left of the graph. (Further to the left you can see traces of a weak CME that passed by DSCOVR on Aug. 24.)

Some CMEs leave the Sun so fast that they create a shock: a pile-up of solar wind plasma at their front end that creates jagged lines in DSCOVR data, like a Richter scale during an earthquake. This CME was moving comparatively slowly, so no real shock is apparent.

  2. The CME hits: As the CME hits, DSCOVR’s total magnetic field readings (blue) get stronger. The north-south component (red) starts to plunge below zero — not all CMEs have a strong negative north-south component like this one did.

When the red line is above zero, that means that the magnetic field hitting DSCOVR is heading primarily in the northward direction, the same as Earth. No problem there — the incoming magnetic energy simply slides right along with Earth’s own fields. But if the red line goes below zero — the magnetic field direction heads south — then the incoming magnetic field is oppositely aligned to Earth’s. And you probably remember from childhood that opposite magnetic poles attract. If a CME has a strong southward magnetic field it can create havoc with Earth’s magnetic fields, peeling back the outward layers like taking the skin off of an orange. This allows particles to sneak past the magnetosphere’s boundary and rain down toward Earth.

The beginning of a CME looks very similar to the regular solar wind, so in real-time, just as it starts, space weather forecasters check plots of plasma density and speed (not shown here) to help determine if they’re really seeing a CME.

  3. DSCOVR is inside the CME: Once inside the CME, the magnetic fields become stronger, and in this case the north-south component stays largely negative (remember, that means the magnetic field is directed south and the CME can more easily disturb Earth’s fields). A CME is like an intact chunk of the Sun that has exploded outwards, taking its structured magnetic fields with it; the solar wind is more like shrapnel. Towards the end (right side) of the CME, DSCOVR was hit with a high-speed stream of solar wind — you can see that the magnetic fields start looking a little messier.
  4. The CME dissipates

Once the CME has passed, it leaves gusty, turbulent plasma in its wake before petering out into the solar wind. Magnetic field readings return to their normal levels.

Depending on DSCOVR’s observations and further simulations, warnings may be sent out to agencies that operate satellites. The Aug. 20 CME (it hit Earth on Aug. 25, but CMEs are labeled based on when they left the Sun) was not fast enough to warrant these alerts. However, its north-south component was strong enough to generate a geomagnetic storm: on the 0 – 9 Kp scale, which measures the disturbance in Earth’s magnetic fields, this one clocked in at a 7. News outlets and blogs reported on it, and aurora sightings right after the event were documented on Aurorasaurus — NASA’s aurora-detecting citizen science collaboration, where real-time aurora sightings are scraped from the web via Twitter or reported directly on their website. Here’s a photo from one user in Fairbanks, Alaska, posted just after midnight on August 26.


The aurora outside Fort Wainwright, AK on Aug 26, 2018. Credit: Jennifer Ocampo/@jennifernocampo

Not bad!

Solar Cycle 24, in X-Ray Vision

By Miles Hatfield
NASA’s Goddard Space Flight Center


September 22, 2018 marked the 12th launch anniversary of Hinode — a solar observatory collaboration between the Japan Aerospace Exploration Agency, the National Astronomical Observatory of Japan, the European Space Agency, the United Kingdom Space Agency and NASA.

Twelve years is long enough for Hinode to observe most of a complete solar cycle. The above image represents Solar Cycle 24 as observed with Hinode’s X-Ray Telescope, or XRT. The XRT observes the Sun’s hot corona, or solar atmosphere, in soft X-rays — wavelengths of light that reveal solar activity reaching tens of millions of degrees Fahrenheit.

The solar cycle refers to an 11-year-long period (on average) during which the Sun’s magnetic field flips — north becomes south, and vice versa — and magnetic activity increases and then decreases. Although driven by the Sun’s internal magnetic dynamo, the progression of the solar cycle is marked by activity visible on its surface and in the corona, including bright solar flares and dark sunspots. The corona, as documented by Hinode’s XRT, reveals an extensive amount about the Sun’s variable activity.

In the graphic above, the farthest (smallest) image is from 2007 and each image increments clockwise by one year. The nearest (largest) image is from 2013, around solar maximum. Note the enhanced presence of bright active regions in the closer images, as the Sun approaches solar maximum. Solar Cycle 24 began on January 4, 2008 with the emergence of a bright active region in the north, and is expected to reach its minimum sometime in 2019.

Hinode’s XRT has taken full-disk synoptic images twice daily for the whole mission (aside from its monthly 3-day maintenance periods). This long-baseline set of measurements extends the record of observations from Hinode’s predecessor, Yohkoh, which ended in 2001.

Hinode maintains a polar orbit around Earth from approximately 370 miles altitude, carrying three scientific instruments: the Solar Optical Telescope, the X-ray Telescope, and the Extreme Ultraviolet Imaging Spectrometer.