Here’s an idea: why doesn’t NASA put a network in the sky, with each orbiter, rover, space-borne telescope, and any other skyward-launched device working as a node? Why not internetwork space? In fact, why not use the existing Internet?
Over the next several decades, as we embark on the next stage of the Internet’s spread into our solar system, scientists will need to manipulate sophisticated experimental instruments on space stations and exchange vast streams of data with colleagues living on the moon and, eventually, Mars. The network that NASA will soon build could very well be the one over which scientists work out startling details of Martian geology, oceanic conditions under the ice of Jupiter’s frigid moon Europa, or the turbulent cloud cover of Venus. It may well be the way a homesick space explorer sends e-mail back home.
If there were network links to remote probes, scientists could dial in to them as easily as they check the latest CNN.com headlines. All the information generated aboard these vehicles and habitats, from humdrum experiments growing crystals in zero gravity to data showing the existence of fossils of ancient microbes, could come in via a single network extending through the vacuum of space—not just from NASA, but from the European Space Agency, China’s National Space Administration, and other organizations as well. So as we move from space discovery to exploration, and perhaps even extraterrestrial settlements, space engineers have begun to radically rethink how mission controllers could best communicate with Earth’s far-flung emissaries.
Everyone at NASA agrees that extending the Internet to other planets would be ideal. Whether it’s possible, however, and how, has become a source of fractious contention within the agency. Two different cliques have very different ideas about how this should be done.
One team of very smart researchers, most of them working at the Goddard Space Flight Center, in Greenbelt, Md. [see photo], is testing ways to use the basic networking protocols that run the Internet. That would let space scientists use all the tools they use on Earth today: Web browsers, file-transfer software, and so on. Using off-the-shelf hardware, as well as reusing existing earthbound software, would save money and development time.
As enthusiastic as these researchers are, another group within NASA has concluded that using Internet protocols in space—at least in deep space—will never work. Like the Goddard group, this camp has some very smart people on its side, including, surprisingly enough, Vinton G. Cerf, an IEEE Fellow who helped write the Internet protocols still used by the billions of computers and other devices on the Internet. According to Cerf and these other NASA researchers, Internet-style chatting with a shuttle 600 kilometers away may be easy enough, but wirelessly conversing with, say, Mars-orbiting craft 200 million km away is an essentially impossible challenge.
What started as a theoretical dispute within NASA is now a practical one, with a hard timetable. In January 2004, the Bush administration announced an ambitious new mission for NASA. It includes a successor to the space shuttles, called the Crew Exploration Vehicle, which is to run its first manned mission in 2014. Also in line are a series of robotic missions to the moon, beginning in 2008.
Once made, a protocol decision may have a lifetime longer than Pluto’s year. The current Internet protocols are based on principles Cerf sketched out on the back of an envelope more than 30 years ago in a San Francisco hotel lobby. NASA has about a year to make network architecture choices that could bind the solar system for decades to come.
Engineers at NASA’s Goddard Space Flight Center thought they had proven that Internet protocols could be extended into outer space in a January 2003 experiment aboard the shuttle Columbia. Sadly, that mission was Columbia’s last. Four days after the experiment, the spaceship came apart over Texas during reentry.
Earlier in that mission, the NASA engineers had transferred a file between Goddard and the shuttle, which was soaring almost 600 km above Earth. It was the first time that a file from outer space made its way back to a terrestrial command center without having its route set ahead of time. To receive that small but historic transmission, technicians had to orchestrate things so that the communications link with the orbiting spacecraft was handed off, like a cellphone transmission, from one ground station to the next. In other words, the equipment on the Columbia handed the data over to the network, and the network delivered the data to its destination.
The experiment was called CANDOS, for Communication and Navigation Demonstration on Shuttle. It had been a long time coming for Goddard engineer Keith Hogie, whose wire-rim glasses, mop-top haircut, and generally youthful demeanor belie his 54 years. Since the mid-1970s, the lanky engineer has been writing one complicated software program after another, all to do more or less the same thing—download and sort telemetry and other data. Each program worked with a unique piece of hardware, so it had to be written from scratch. But, fortunately for Hogie, a lot of the communications protocols he wrote were pretty generic.
After reinventing this wheel at least four times by the early 1990s, Hogie came to understand the power of the Internet Protocol. IP is the lingua franca for data communications. It’s not just the way bits are packaged for transmission on the Internet but also how they are routed from machine to machine. As happens all the time on the Internet, two computer systems using wildly different hardware—a Hewlett-Packard PDA and an IBM mainframe, say—can pass the data back and forth, so long as they both speak IP.
For Hogie and the rest of NASA’s telecommunications programmers, IP promises to greatly reduce the number of hours spent ensuring that NASA’s diverse spacecraft can communicate with one another and with ground stations. If NASA were to adopt a common platform, along with standards on how data should be formatted, missions could use off-the-shelf communications software packages rather than requiring people like Hogie to write new ones. The success of the Internet over the past two decades has led to some assumptions about how data communications would work everywhere. On Earth, the Internet passes information in the form of data packets, whose bits may represent a Web page or an e-mail. Messages seem to flit from place to place instantaneously.
That isn’t true in space. Message speed is capped by the speed of light, a limitation unnoticeable here but obvious out in space. It takes over a second for light to travel to the moon; light from Mars takes anywhere from about 3 to 22 minutes to reach Earth, depending on the two planets’ positions. Hopping through relay satellites, as most transmissions do, might double the transit time. As it turns out, those kinds of delays would doom a space connection using the standard protocols that govern Internet communication, because they require that the sending computer get a confirmation from the recipient machine that each data packet has been received.
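Those delays are easy to check with back-of-the-envelope arithmetic. Here is a quick sketch; the distances are round figures, and the Earth-Mars numbers swing widely as the planets move:

```python
# Back-of-the-envelope light-travel delays (distances approximate).
C_KM_PER_S = 299_792.458  # speed of light in vacuum

DISTANCES_KM = {
    "Moon (mean)": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (near conjunction)": 401_000_000,
}

for body, km in DISTANCES_KM.items():
    s = km / C_KM_PER_S
    print(f"{body}: one-way delay {s:,.1f} s ({s / 60:.1f} min)")
```

The moon comes out at about 1.3 seconds one way; Mars ranges from roughly 3 minutes at closest approach to more than 22 minutes near conjunction.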
IP doesn’t include a mechanism to ensure that packets arrive at their destination, so it’s never used by itself. Almost all Internet communication uses a second protocol as well, the Transmission Control Protocol, or TCP. Cerf and a colleague, Robert Kahn, introduced the pair in a paper in the May 1974 issue of IEEE Transactions on Communications. Telecommunications protocols are usually thought of as being stacked on top of one another, and in Cerf and Kahn’s scheme, IP lies near the bottom of the stack, just above the physical connection between two devices (cables, radio waves, and so on). TCP operates at the next layer up.
On the Internet, TCP ensures a communications link between two parties by setting up a stream of acknowledgments between them. The receiving computer sends a receipt for each set of packets it gets. If the sending computer doesn’t get these acknowledgments promptly, it assumes the network is congested and slows down the transmission rate, eventually resending the packets it hasn’t heard back about. TCP made the Internet what it is today—always busy but almost never congested to the point of collapse.
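The arithmetic behind that acknowledgment scheme is stark over long distances: TCP allows only one window’s worth of unacknowledged data in flight, so throughput can never exceed the window size divided by the round-trip time. A rough sketch, assuming classic TCP’s 64-kilobyte maximum window (the ceiling without the window-scaling extension):

```python
# TCP's throughput ceiling: at most one window of unacknowledged data
# can be in flight per round trip, so throughput <= window / RTT.
WINDOW_BYTES = 65_535  # classic TCP's maximum window (no window scaling)

for link, rtt_s in [("cross-country fiber", 0.08),
                    ("Earth-moon", 2.6),
                    ("Earth-Mars at ~20 light-minutes", 2_400)]:
    print(f"{link}: at most {WINDOW_BYTES / rtt_s:,.0f} bytes/s")

# Earth-Mars: roughly 27 bytes/s on a flawless link -- and every lost
# packet stalls the stream for another full round trip.
```

Even before congestion control kicks in, a Mars link would crawl at a few dozen bytes per second.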
The file transfer from the Columbia on that cold night in 2003 was not NASA’s first attempt at extending TCP/IP into the heavens. Spacecraft had made simple connections with Earth, using Internet protocols, several times before. These experiments worked well, but they skirted another major challenge of space communications via Internet protocols: the need to go through multiple ground stations, and the handing-off difficulties this inevitably entailed. As the world rotates on its axis, only a few of the many ground stations scattered around the globe can communicate with an individual spacecraft, itself in motion. Relay satellites can improve a communications link with a ground station, as they remain in line of sight with the craft for longer periods of time. But the problem remains: to get a command to a spacecraft, a control center needs to know which ground station has a “view” of the craft at any given time.
So NASA planners painstakingly calculate ahead of time which ground station their craft can contact at any given moment. The chore involves writing out a timetable of sorts, either on a computer or on a whiteboard. With a craft’s scheduled trajectory in hand, the NASA personnel calculate when it will be in contact with each ground station and schedule a communication session through that particular station. This is work that will grow ever more tedious as NASA puts more craft in flight. Wouldn’t it be great to automate it?
CANDOS showed how Internet technologies could help. On Earth, messages and Web pages don’t travel by precalculated routes. Data are packaged and then volleyed over the Internet by a series of routers—devices that relay packets from one network to another. A router examines the destination of a packet and then forwards it to a connecting router, based on two factors—which routers are closer to that packet’s ultimate destination and which paths have the most bandwidth available at the moment.
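In miniature, that forwarding decision is a table lookup: match a packet’s destination against the known routes and hand it to the next hop of the most specific match. A toy longest-prefix-match sketch, with prefixes and router names invented for illustration:

```python
import ipaddress

# Toy forwarding table: prefix -> next hop (all names invented).
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.42.0.0/16"): "router-B",  # more specific
    ipaddress.ip_network("0.0.0.0/0"): "router-default",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.42.7.9"))  # router-B (16-bit prefix beats 8-bit)
print(next_hop("10.1.2.3"))   # router-A
print(next_hop("8.8.8.8"))    # router-default (no specific route)
```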
Beyond the Internet Protocol: Scott Burleigh is one of several engineers at NASA’s Jet Propulsion Lab who think today’s Internet protocols need to be supplemented with new ones to network remote space. Photo: Thomas Michael Alleman
On Earth, Internet servers (the machines that store Web pages, e-mail, and other data) sit in offices and data centers, as do routers. In space, though, satellites, probes, and other vehicles will have to act as their own servers, and they will always be on the move. So CANDOS tested a new protocol, called Mobile IP. Developed by the Internet Engineering Task Force (an influential volunteer group that sets Internet standards), Mobile IP allows servers to roam through space and still be reached.
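The idea behind Mobile IP, roughly, is that a roaming node keeps a permanent “home address” while a home agent tracks its current point of attachment (its “care-of address”) and tunnels traffic there. A simplified sketch of that binding, with names and addresses that are purely illustrative, not NASA’s actual configuration:

```python
# Simplified sketch of Mobile IP's home-agent binding (addresses and
# names below are purely illustrative, not NASA's configuration).
bindings: dict[str, str] = {}  # home address -> current care-of address

def register(home_addr: str, care_of_addr: str) -> None:
    """The mobile node re-registers whenever it attaches somewhere new."""
    bindings[home_addr] = care_of_addr

def forward(home_addr: str, payload: bytes) -> str:
    """Correspondents always address the home address; the home agent
    tunnels each packet to wherever the node is right now."""
    return f"tunnel {len(payload)} bytes to {bindings[home_addr]}"

# The shuttle drifts from one ground station's footprint to the next:
register("shuttle-home-addr", "guam-station")
print(forward("shuttle-home-addr", b"telemetry request"))
register("shuttle-home-addr", "wallops-station")
print(forward("shuttle-home-addr", b"telemetry request"))
```

Correspondents never need to know which ground station currently sees the craft; the binding table absorbs the handoff.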
NASA’s scattered ground stations are positioned so that a spacecraft can always be in contact with one of them. So for Goddard’s experiment, the team set up routers at ground stations on the island of Guam and at three U.S. locations: White Sands, N.M.; Wallops Island, Va.; and Merritt Island, Fla. The Goddard facility, near Washington, D.C., connected to these routers through an internal NASA network. That done, it could communicate with the shuttle regardless of where it was in orbit or which ground station happened to have the shuttle in sight [see diagram].
To prove how powerful this concept of using IP in space could be, the Goddard team set up a log-in account—a user name and a password—for some colleagues at the Marshall Space Flight Center, in Huntsville, Ala., so they, too, could access the ill-fated shuttle’s computer. Without a standard TCP/IP connection, Marshall might have had to commission someone like Hogie to write custom software just to give its engineers access. But with TCP/IP, accessing the shuttle was as easy as using an AOL account. Once logged on, technicians could upload or download files, check the logs to see how the onboard server was running, or do anything else the staff at Goddard could do.
The CANDOS trial was so successful that NASA engineers are starting to incorporate some of its technology into the agency’s existing communication networks. CANDOS project manager David Israel and his team are working on something they call NASA Space Network IP Services, based on a set of permanent routers placed at NASA ground stations that will offer researchers the same IP connections that the Goddard and Marshall teams enjoyed. By 2007, these services will allow mission teams to turn their spacecraft into additional network nodes. Researchers on Earth will be able to manipulate onboard instruments, monitor the craft’s well-being, and perhaps even route another spacecraft’s data through it.
That assumes, of course, that these spacecraft will run Internet software and use Internet protocols in deep space. And that’s something that will happen only if Goddard engineers can conquer the vociferous doubts of a team at NASA’s Jet Propulsion Laboratory.
IP Everywhere: Engineers at NASA’s Goddard Space Flight Center think existing Internet protocols can be extended deep into space. From left: Jim Rash, Keith Hogie, and David Israel. Photo: Robert Severi
Literally and figuratively, CANDOS took only a baby step, the dissenters at JPL say. It proved little. The 600 kilometers between Earth and a near-Earth orbiting space shuttle doesn’t even measure up to the distance between Paris and Prague; countless packets travel much farther than that every minute of every day. The question remains: just how far into space can the Internet Protocol reasonably go?
At NASA’s JPL, in the foothills of the San Gabriel Mountains, near Pasadena, Calif., doubts about IP’s suitability in space began in the early 1990s. Run by the California Institute of Technology, JPL plans, designs, and controls deep-space missions for NASA. The Mars rovers are JPL’s handiwork, as is the Cassini space probe now orbiting Saturn.
Like the engineers at Goddard, JPL researchers were interested in using IP to standardize telecommunications throughout the solar system. But the more they tried to shoehorn IP into the task, the more they came to doubt its practicality for deep space.
Why? The JPL researchers looked at the same basic obstacles—the handoff problem and the distance-delay problem. But where the Goddard group saw surmountable obstacles, the JPL engineers—including “Mr. IP” Cerf—saw showstoppers. Take the delay problem: the JPL crew found that no matter how they tried to readjust TCP for deep-space travel, it would not work. “Remote control is very hard when you have a 40-minute round-trip time,” says Cerf.
JPL senior staffer Adrian Hooke, along with engineers Robert Durst and Keith Scott from the nonprofit organization Mitre Corp., started work in 1997 on a set of IP-based standards to address these problems. In 1998, Cerf started helping Hooke’s team. By then, the group had been through four iterations of a modified set of Internet protocols. They all involved modifying TCP so that it would not rely on the sender and final receiver’s being in constant contact.
In 2002, a member of the JPL team, Kevin Fall, a researcher for Intel Corp.’s Berkeley Research Lab, came up with the term Delay Tolerant Networking (DTN) to describe the architecture that would be needed to address this sort of problem. The Interplanetary Internet Working Group rebranded itself the Delay Tolerant Networking Research Group and began working on drafts, submitted to the Internet Research Task Force, to describe how such a network should operate. Fall now leads the group.
A delay-tolerant network is designed to move data across rough networks—networks that have long delays and noisy connections. Central to Fall’s concept of DTN is “bundling,” a mechanism for a space network’s nodes—probes, relay satellites, and the like—to hold data if the next hop in the network is unavailable. Communications specialists call this a store-and-forward network.
This approach contrasts greatly with how nodes handle data on the TCP/IP-driven Internet. An Internet router doesn’t keep track of the packets it conveys, nor where they are going beyond the next hop. Only the computer at the endpoint of all this hopping knows that a packet has arrived (and sends the acknowledgment back through the entire chain).
A DTN router, in contrast, keeps a copy of every packet of data sent, at least until the next node has sent a message that it has received it. That scheme ensures that no data gets lost en route, even if a node is offline. Should a relay satellite on an interplanetary Internet slip behind a moon, a router on a DTN network would simply hold onto the data that needed to be transmitted until that satellite reappeared, or until another one came into position to provide the necessary hop.
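A minimal sketch of that store-and-forward discipline, assuming each node can tell when its outbound link is up and discards a bundle only after the next hop acknowledges it (the class and method names here are invented for illustration, not DTN’s actual API):

```python
from collections import deque

class DtnNode:
    """Toy DTN router: hold every bundle until the next hop confirms
    receipt, even if the link stays down for hours."""

    def __init__(self, name: str):
        self.name = name
        self.queue = deque()    # bundles awaiting transmission
        self.awaiting_ack = {}  # bundle id -> copy kept until acked

    def accept(self, bundle_id: str, data: bytes) -> None:
        self.queue.append((bundle_id, data))

    def try_forward(self, link_up: bool, send) -> None:
        """Transmit whatever we can; everything else simply waits."""
        while link_up and self.queue:
            bundle_id, data = self.queue.popleft()
            self.awaiting_ack[bundle_id] = data  # keep a copy
            send(bundle_id, data)

    def on_ack(self, bundle_id: str) -> None:
        """Next hop has the bundle; now we may discard our copy."""
        self.awaiting_ack.pop(bundle_id, None)

# Relay slips behind the moon: link is down, bundles just accumulate.
node = DtnNode("mars-relay")
node.accept("img-0042", b"...photo bytes...")
node.try_forward(link_up=False, send=lambda i, d: None)  # nothing sent
node.try_forward(link_up=True,
                 send=lambda i, d: print(f"{i}: sent {len(d)} bytes"))
```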
While the new protocol is hugely inefficient by earthly standards, using up a lot of memory to hold duplicate copies of data and needing orders of magnitude more time to send complete messages, it is a surefire way to get data to its destination. And it has some other benefits for a device in outer space, which, after all, has other things to worry about besides communicating with Earth.
Suppose a robotic surveyor on Mars has to navigate harsh terrain, looking for rocks that might contain fossils, and then send new photos of them back to Earth—a trip of 10 to 12 minutes at typical distances. If it were a node on a TCP/IP network, the robot would have to keep a copy of that data in its limited memory banks until it got a confirmation that the data had been received on Earth. Such a notice would take some 20 minutes or more to arrive—longer still if a direct connection weren’t available. DTN, on the other hand, would require the surveyor to keep the data only until they were received by the first node—probably a nearby relay satellite. The surveyor could empty its memory banks and go back to snapping more photos within seconds.
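How long the surveyor must tie up its memory in each case is simple to estimate. The rough sketch below assumes an Earth-Mars distance of 200 million km and, hypothetically, a relay orbiter 400 km overhead:

```python
C_KM_PER_S = 299_792.458

# How long must the rover buffer a photo before it may delete it?
earth_one_way_km = 200_000_000  # Earth-Mars at a typical distance
relay_one_way_km = 400          # assumed altitude of a nearby relay

end_to_end_hold = 2 * earth_one_way_km / C_KM_PER_S  # wait for Earth's ack
custody_hold = 2 * relay_one_way_km / C_KM_PER_S     # wait for relay's ack

print(f"TCP/IP-style end-to-end ack: {end_to_end_hold / 60:.0f} minutes")
print(f"DTN custody ack from relay:  {custody_hold * 1000:.1f} milliseconds")
```

That works out to roughly 22 minutes of mandatory buffering versus a few milliseconds.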
In December, the JPL team submitted a draft for a DTN-supporting protocol called the Licklider Transmission Protocol, named in honor of Internet pioneer J.C.R. Licklider. (In 1962, Licklider jokingly nicknamed a group of researchers he was working with the “Intergalactic Computer Network.”) The Licklider Transmission Protocol would replace both IP and TCP. Once again, picturing protocols as layers in a stack, if the bottom layer is the physical wire line or radio wave connecting two devices, the Licklider Transmission Protocol sits just above that. It makes the link between two routers more reliable than IP and TCP can, says JPL researcher Scott Burleigh, who coauthored the draft.
The fact that DTN eschews IP bothers some at Goddard and elsewhere in the NASA realm. You can hear the rumblings of discontent, if only in small groups huddled at space communications conferences or in cryptically snippy e-mails posted to technical mailing lists.
“There’s been resistance to the idea of not using IP everywhere really from the beginning,” JPL’s Burleigh acknowledges. No one at NASA disputes the science behind DTN, not even its critics at Goddard. Not when Cerf, who could be expected to defend IP tenaciously, is on the JPL team. As CANDOS’s Israel concedes, “You can’t really say those guys don’t know what they are talking about.”
The Goddard concern basically boils down to this: if NASA were to choose DTN as the single architecture for its space missions, it could very well miss out on the opportunity to reuse commercially developed Internet software and hardware. It would in effect be prolonging its decades-old reliance on specialized products that cost far more or have fewer features (or both). With so much money and operability at stake, working with some version of IP is long overdue, these researchers say.
Consider President Bush’s plan for a moon base and an eventual mission to Mars. It calls for coordinating the activities of multiple systems and habitats—robots that explore planets, way stations on the moon, relay satellites that orbit planets. It cries out for an IP-based networked approach to managing communications, Goddard specialists say, adding that commercial IP could help cut costs dramatically. It would also allow scientists to run many Internet applications that DTN would render unusable. While DTN can work with IP for some things, such as file transfers, using DTN to connect two IP networks—one in space and another on Earth, for instance—breaks the end-to-end connectivity essential for running many other Internet applications.
One such set of applications involves the interfaces that control scientific instruments. Today, scientists usually have to write by hand the programs that control the instruments that carry out their experiments aboard space probes. Researchers at the NASA Glenn Research Center, in Cleveland, have developed software tools that allow scientists to place a small Web server on each instrument. Then, with a space-based IP network, the scientists would need only a Web browser on a computer to tap into that instrument—in principle, from any network connection: a university laboratory, home, or a Starbucks coffee shop. With a DTN network, on the other hand, there would be no end-to-end Internet connection all the way to the instrument, and the browser-based approach would be useless.
Then there’s the cost issue. Consider, for example, security software, needed so that hackers can’t invade NASA’s systems and take multimillion-dollar space probes on remote-controlled interplanetary joy rides. Such software is expensive to create but, for standard IP applications, easily purchased nowadays.
Over the years, researchers have carefully designed stripped-down security software for spacecraft communications that wraps a packet with a very thin envelope, reducing to a minimum the total number of bytes of data needed to send a photo or other information. The data savings, though, have to be weighed against the fact that a specialized solution is expensive to implement, maintain, and update, given the tiny number of vendors, compared with the commercial Internet equivalent, IPSec.
But what about the long transmission times and other challenges of space communications? At least two techniques exist for dealing with dodgy connectivity, the IP-in-space advocates say. One is the User Datagram Protocol, or UDP, a leaner companion to TCP that sends packets out without requiring receipts. It’s been used for years in applications such as streaming media, where losing a packet here and there is much less important than keeping up with a high-speed data exchange. In fact, the CANDOS experiment aboard the shuttle used a version of UDP, called Multicast Dissemination Protocol.
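UDP’s fire-and-forget behavior shows up in just a few lines of standard socket code: the sender returns as soon as each datagram is handed off, with no handshake and no receipt. (The address below is a documentation-range placeholder, not a real ground station.)

```python
import socket

# UDP is fire-and-forget: sendto() returns as soon as the datagram is
# handed to the network; no acknowledgment is ever expected.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"telemetry frame 1", ("192.0.2.10", 5005))  # placeholder
sock.sendto(b"telemetry frame 2", ("192.0.2.10", 5005))
sock.close()
# Contrast with TCP: no connect(), no handshake, nothing stalls waiting
# for a receipt -- so a long light-delay never blocks the sender, but
# lost datagrams are simply gone.
```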
Another model is provided by a store-and-forward network millions of us use every day—the Simple Mail Transfer Protocol, in which a server holds e-mail until we retrieve it. Phil Paulsen, who manages the earth science technology office at the Glenn Research Center, was part of a NASA team that developed, with General Dynamics Corp., store-and-forward software specifically for space communications.
“We assumed that there would be these kinds of satellites where you do not have continuous connectivity,” Paulsen says. The best thing about the software? It runs over IP. But like Goddard’s Mobile IP, it has been tested only in near-Earth orbits. No one has even done the basic math to calculate whether such an approach can be extended across millions of kilometers.
For his part, Cerf isn’t swayed. “They mystify me,” he says of DTN’s detractors, and generally of those who insist ordinary Internet protocols can be used in space. “My opinion is that you can’t tweak the protocols enough to make them useful and still be compatible.”
Sure, these specialists understand terrestrial Internet technologies fiendishly well. What they don’t get, Cerf says, is the “space” aspect of space communications. Space will befuddle earthbound protocols in ways most network experts rarely conceive of.
Timing, for instance. Internet routers around the globe synchronize their clocks by the millisecond in order to coordinate the flow of packets. That synchronization is far more difficult when many light-minutes separate a spacecraft’s onboard clock from its reference clock. “The problem with the interplanetary environment is that there ain’t no such thing as ‘now,’” Cerf says.
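The textbook synchronization arithmetic makes the point. An NTP-style exchange estimates a remote clock’s offset from four timestamps, but the estimate is only good to within half the round-trip time, so at interplanetary distances the uncertainty is minutes, not milliseconds. A sketch with illustrative timestamps:

```python
# NTP-style offset estimate from a four-timestamp exchange:
#   t0: request sent (local clock)   t1: request received (remote clock)
#   t2: reply sent (remote clock)    t3: reply received (local clock)
def estimate_offset(t0, t1, t2, t3):
    offset = ((t1 - t0) + (t2 - t3)) / 2  # textbook NTP offset formula
    rtt = (t3 - t0) - (t2 - t1)
    return offset, rtt / 2                # estimate, worst-case error

# Terrestrial exchange: RTT ~80 ms -> offset known to within ~40 ms.
print(estimate_offset(0.000, 0.041, 0.042, 0.080))

# Mars exchange (times in seconds): RTT ~40 min -> offset uncertain by
# ~20 min, since the protocol can't tell how the delay splits per leg.
print(estimate_offset(0, 1_200, 1_201, 2_400))
```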
Power, a limited resource on all NASA equipment, is another issue. Can TCP or some other protocol be tweaked so that it gets less chatty, and therefore less power-hungry, when a craft’s fuel cells run down? The JPL team doesn’t think so.
IP everywhere or not? For outsiders, such emotional differences in philosophy may seem like a NASA divided. For the space agency, however, it’s just good science: have researchers beat on a problem from different angles and let the best solution win out.
Sitting quietly on the sidelines, waiting to evaluate all this work, are the NASA architects who must design the ambitious space fleet President Bush envisions. Over the next year, NASA’s Space Communications Architecture Working Group will start planning the basic communications design for the Crew Exploration Vehicle. “Our job with technology is to figure out where it fits in with the evolving architecture,” says its chairman, John Rush.
Rush has heard from both sides of the IP debate. He admits to liking the IP folks’ idea of “the scientist sitting in his laboratory and communicating with his instruments on a spacecraft.” But he’s also well aware of the challenges. Over the next 10 months, JPL, Goddard, and Glenn will make their cases to the working group, through presentations and white papers.
One thing is certain: NASA is moving toward a networked model of interplanetary communications. Long gone are the days of dedicating a communications link to any one craft. The question now is how to build a network for a sky full of orbiters, shuttles, surveyors, and other spacecraft—the equipment that will be our eyes and ears to the universe. We want to make sure we get the best view possible.
Joab Jackson is an associate writer for Government Computer News.
Some of NASA’s work on IP in space is documented online at http://ipinspace.gsfc.nasa.gov. The progress of the Delay Tolerant Networking Research Group can be followed at its Web site, http://www.dtnrg.org. A tutorial is at http://www.dtnrg.org/docs/tutorials/warthman-1.1.pdf.