Next: NICMOS Software: An Observation Case Study
Up: Dataflow and Scheduling
Previous: OPUS-97: A Generalized Operational Pipeline System
Table of Contents -- Index -- PS reprint -- PDF reprint
P. L. Shopbell, J. G. Cohen, and L. Bergman¹
California Institute of Technology, Pasadena, CA 91125
¹Jet Propulsion Laboratory, California Institute of Technology
The ground network in California connects Caltech with JPL, the site of the ACTS ground station. This segment was established as part of Pacific Bell's extant CalREN fiber optic network and has proved to be the most reliable portion of our network.
The satellite connection was made available to us through a grant from NASA as part of the Gigabit Satellite Network (GSN) testbed program. NASA's Advanced Communications Technology Satellite (ACTS) was built to explore new modes of high speed data transmission at rates up to OC-12 (622 Mbit/sec). The 20-30 GHz frequency band has been employed for the first time by a communications satellite, with extensive rain fade compensation.
The ground network in Hawaii, which connects Keck Observatory with the other ACTS ground station at Tripler Army Medical Center in Honolulu, has been somewhat more complex in its evolution, primarily due to the relative inexperience of GTE Hawaiian Telephone and the lack of prior infrastructure in Hawaii. This segment initially consisted of a combination of underwater fiber, microwave links, and buried fiber. The higher bit error rate (BER) of the non-fiber segment produced noticeable instability in the end-to-end network. Fortunately, in January 1997 this portion of the Hawaii ground network was upgraded to optical fiber, and the improved performance of the final all-fiber network for high-speed data transfers was immediately apparent.
In order to support standard higher-level (IP) networking protocols, we installed an Asynchronous Transfer Mode (ATM) network over this infrastructure. The transfer of 53-byte ATM cells is performed by hardware switches throughout the network, at speeds of OC-1 (51 Mbit/sec) and above. Several vendors have supplied the ATM switches and Network Interface Cards (NICs), providing a stringent test of compatibility in the relatively new ATM environment. Although we have encountered several interoperability problems, none have been serious, and the ATM and telephone vendors have been extremely helpful.
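As a rough illustration (not from the paper), the fixed 53-byte ATM cell format itself caps the usable bandwidth of such a link: each cell carries a 5-byte header, so at most 48/53 of the line rate is available as payload before any higher-layer framing is counted. A minimal sketch:

```python
# Sketch: effective payload bandwidth of an ATM link (illustrative).
# ATM cells are 53 bytes: a 5-byte header plus a 48-byte payload,
# so at most 48/53 of the line rate carries data.

CELL_SIZE = 53                           # bytes per ATM cell
HEADER_SIZE = 5                          # bytes of cell header
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 bytes of payload

def effective_mbit(line_rate_mbit: float) -> float:
    """Payload bandwidth after ATM cell-header overhead."""
    return line_rate_mbit * PAYLOAD_SIZE / CELL_SIZE

oc1 = 51.84   # OC-1 line rate in Mbit/s
print(f"OC-1 payload ceiling: {effective_mbit(oc1):.1f} Mbit/s")
# AAL5 framing and IP/TCP headers reduce the achievable rate further.
```

This ceiling applies before the TCP windowing effects discussed below, which in practice dominated the observed throughput.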
In order to facilitate reliable data transfer, as well as to allow the use of the wealth of software tools already available, we are running the standard IP protocols over ATM using a pseudo-standard implementation known as ``Classical IP''. This enables the use of the standard network-based applications that are in widespread use on the Internet. Tools such as ftp and telnet are part of every observing run, as are additional special-purpose applications, such as an audio conferencing tool (rat) and a shared whiteboard tool (wb).
Fortunately, this problem is well known in the high-speed networking community: networks such as ours, with a large bandwidth-delay product, are known as ``long fat networks'' (LFNs; see RFC 1323). In the case of the SunOS operating system (to which we are constrained by legacy control software at Keck), we obtained the TCP-LFN package from Sun Consulting, which purports to support the RFC 1323 extensions. Unfortunately, a number of limitations of SunOS 4.1.4 conspire to prohibit extremely large window sizes, regardless of the TCP-LFN software. In our case, the compiled-in kernel limit of 2 Mbytes of mbuf memory (the kernel buffers that carry network packets) turned out to be the major constraint, limiting our window size to no more than 1 Mbyte.

Indeed, our final tuned network delivered the expected maximum TCP/IP performance of approximately 15 Mbit/sec (roughly 30% of OC-1). Although perhaps disappointing in a relative sense, this bandwidth is far in excess of T1 speed (1.544 Mbit/sec) and allows an 8 Mbyte image to be transferred in approximately 5 seconds. As a further comparison, it exceeds by 50% the bandwidth available on the local area Ethernet network at the Keck Telescope itself. Figure 2 illustrates typical bandwidth measurements of our network for UDP and TCP, the latter before and after the network was upgraded to fiber in Hawaii.

While network performance was perhaps not at the level desired, due to developing infrastructure in Hawaii and idiosyncrasies within the operating system, issues of network reliability had a far greater impact on our remote observing operation. The experimental and limited nature of the ACTS program created a number of difficulties which one would almost certainly not face with a more mature and/or commercial satellite system. The practical consequence of the reliability issue is that at least one observer must still be sent to Hawaii, in case of ACTS-related problems.
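The window-size constraint explains the observed throughput directly. TCP can keep at most one window of unacknowledged data in flight per round trip, so throughput is bounded by window size divided by round-trip time. A minimal sketch, assuming a typical geostationary-satellite round-trip time of about 540 ms (an illustrative figure, not one measured in the paper):

```python
# Sketch of the TCP window / throughput relation for a "long fat
# network" (RFC 1323). The 540 ms round-trip time is an assumed,
# typical geostationary-satellite value.

def max_tcp_throughput_mbit(window_bytes: int, rtt_s: float) -> float:
    """TCP can have at most one window in flight per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

RTT = 0.54                  # seconds, assumed GEO round-trip time
window = 1 * 1024 * 1024    # 1 Mbyte, the mbuf-limited window size

print(f"{max_tcp_throughput_mbit(window, RTT):.1f} Mbit/s")
# ~15.5 Mbit/s, consistent with the ~15 Mbit/s reported above.

# Conversely, filling OC-1 (51.84 Mbit/s) at that RTT would require
# a window of roughly:
needed_bytes = 51.84e6 / 8 * RTT
print(f"window needed for OC-1: {needed_bytes / 1024 / 1024:.1f} Mbyte")
```

Under these assumptions a window of over 3 Mbytes would be needed to fill OC-1, well beyond the 1 Mbyte permitted by the SunOS kernel limit, which is why the RFC 1323 window-scaling extensions matter on satellite links.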
Reliability issues aside, we have demonstrated that true remote observing over high-speed networks provides several important advantages over standard observing paradigms. Technical advantages include more rapid download of data and the opportunity for alternative communication facilities, such as audio- and videoconferencing. Scientific benefits include involving more members of observing teams while decreasing expenses, enhancing real-time data analysis of observations by persons not subject to altitude-related conditions, and providing facilities, expertise, and personnel not normally available at the observing site.
Due to the limited scope of the ACTS project, future work from the standpoint of Keck Observatory will focus on establishing a more permanent remote observing facility via a ground-based network. At least two projects are under way in this direction: remote observing from the Keck Headquarters in Waimea, where up to 75% of observing is now performed each month, and remote observing from multiple sites on the U.S. mainland using a slower T1 connection (Conrad et al. 1997, SPIE Proc. 3112). Initial tests of this latter approach over the Internet have been extremely promising.