Wicked Fast Data Transport


My prior piece on Data Expedition drew some responses, including messages from Chin Fang of Zettar.com about their work on high-speed data transfer, particularly for high-performance computing (HPC) and big-science applications. Below is a look at what they have accomplished with their multi-level parallel data transfer solution.
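Before getting to the numbers, a note on what "multi-level parallel" presumably means in practice: parallelism at several levels at once, e.g. across files, across chunks within a file, and across network streams. Here is a toy sketch of just one of those levels, chunk-level parallel I/O applied to a local copy. This is my own illustrative code, not Zettar's; the file names and chunk size are arbitrary choices.

```python
# Toy sketch of chunk-level parallel I/O, one level of the multi-level
# parallelism a tool like Zettar's might use. Purely illustrative.
# (os.pread/os.pwrite are POSIX; Windows would need another approach.)
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024 * 1024  # 64 MiB per chunk, an arbitrary choice


def copy_chunk(src_fd, dst_fd, offset, length):
    # Chunks are independent, so reads and writes at fixed offsets can
    # proceed in parallel and keep more of the storage bandwidth busy.
    data = os.pread(src_fd, length, offset)
    os.pwrite(dst_fd, data, offset)


def parallel_copy(src_path, dst_path, workers=8):
    size = os.path.getsize(src_path)
    src_fd = os.open(src_path, os.O_RDONLY)
    dst_fd = os.open(dst_path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.ftruncate(dst_fd, size)  # preallocate so offset writes land correctly
    try:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [
                pool.submit(copy_chunk, src_fd, dst_fd, off, min(CHUNK, size - off))
                for off in range(0, size, CHUNK)
            ]
            for f in futures:
                f.result()  # surface any I/O errors from the workers
    finally:
        os.close(src_fd)
        os.close(dst_fd)


if __name__ == "__main__":
    parallel_copy("big_input.bin", "big_output.bin")
```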

In October 2018, the Energy Sciences Network (ESnet), run by the U.S. Department of Energy (DOE), saw a world record: 1 PB of data transferred in 29 hours by the SLAC National Accelerator Laboratory in Menlo Park, CA and Zettar, Inc., across a 5,000-mile network loop operated by ESnet (http://es.net/news-and-publications/esnet-news/2018/esnets-network-software-help-slac-researchers-in-record-setting-transfer-of-1-petabyte-of-data/). The simplified network map below shows the sites that ESnet serves, the structure of the network, and the traffic load at the time the image was captured. The dark red color on the southern links of the network indicates data transfers exceeding 50 Gbps.
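Some quick arithmetic puts the record in perspective (treating 1 PB as 10^15 bytes):

```python
# Average rate of the record run: 1 PB (10**15 bytes) in 29 hours.
bits = 1e15 * 8
seconds = 29 * 3600
print(f"{bits / seconds / 1e9:.1f} Gbps")  # -> 76.6 Gbps sustained
```

Roughly 77 Gbps sustained for more than a day, which squares with the dark red, 50+ Gbps links on the map.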

DOE ESnet map (image from DOE press release)

According to the release, the project aims to achieve the high data transfer rates needed to accommodate the volume of data that the Linac Coherent Light Source II (LCLS-II), expected to come online in 2020, will generate. The LCLS is the world's first hard X-ray free-electron laser (XFEL); its strobe-like pulses are just a few millionths of a billionth of a second long and a billion times brighter than previous X-ray sources. LCLS-II will provide a major jump in capability, moving from 120 pulses per second to 1 million pulses per second. Scientists use LCLS to take crisp pictures of atomic motions, watch chemical reactions unfold, probe the properties of materials, and explore fundamental processes in living things.

SLAC plans to generate data transfers of multiple terabits per second, moving experimental results from SLAC to the DOE's supercomputing facilities for storage and analysis. ESnet carries data between universities, DOE national laboratories, and national user facilities over a backbone made up of multiple 100 Gbps links.

The graph below shows that the petabyte transfer trial accounted for one-third of the total network traffic on ESnet during the 29-hour run.
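Taking those figures at face value, the one-third share implies the total load ESnet was carrying during the window:

```python
# The run was one-third of ESnet's traffic, so the implied total load:
transfer_gbps = 1e15 * 8 / (29 * 3600) / 1e9  # ~76.6 Gbps, from above
print(f"{3 * transfer_gbps:.0f} Gbps total")  # -> 230 Gbps
```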

The 1 PB transfer in 29 hours consumed one-third of ESnet bandwidth (image from DOE press release)

According to the October release, Zettar is a National Science Foundation-funded software firm in Palo Alto that develops hyperscale data distribution software capable of multi-100 Gbps transfer rates and that collaborates with ESnet and DOE national labs. For the trial, the team used ESnet's On-Demand Secure Circuits and Advance Reservation System (OSCARS) to reserve the bandwidth for the run.
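I haven't used OSCARS myself, and the snippet below is not its actual API. It is just a sketch of the core idea behind an advance reservation system: a new booking is admitted only if, at every instant it overlaps existing bookings, the total reserved bandwidth stays within link capacity.

```python
# Conceptual sketch of advance bandwidth reservation (the idea behind a
# system like OSCARS, not its real interface).
from dataclasses import dataclass


@dataclass
class Reservation:
    start: float  # hours from now
    end: float
    gbps: float


def admissible(existing, new, capacity_gbps):
    # Load only changes at reservation start times, so checking the new
    # window's start plus every existing start inside it finds the peak.
    points = {new.start} | {r.start for r in existing}
    for t in points:
        if not (new.start <= t < new.end):
            continue
        load = new.gbps + sum(r.gbps for r in existing if r.start <= t < r.end)
        if load > capacity_gbps:
            return False
    return True


booked = [Reservation(0, 29, 80)]  # something like the petabyte run
print(admissible(booked, Reservation(10, 20, 40), 100))  # False: 120 Gbps peak
print(admissible(booked, Reservation(10, 20, 15), 100))  # True: 95 Gbps peak fits
```

Real systems reserve along an entire multi-hop path and handle circuit setup and teardown, but this admission check is the heart of it.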

Chin Fang told me that much of their transfer capability is gated by storage interface bottlenecks. As a consequence, he is very interested in NVMe and other high-performance storage advances built on solid-state storage.
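It's easy to see why storage matters at these speeds. A quick calculation, using an assumed per-drive rate that is mine rather than anything from the article:

```python
import math

# A 100 Gbps stream is 12.5 GB/s of payload. Assuming ~3 GB/s of sustained
# sequential throughput per NVMe drive (my assumption), keeping the pipe
# full takes several drives working in parallel:
link_gbps = 100
per_drive_gb_per_s = 3.0
drives = math.ceil(link_gbps / 8 / per_drive_gb_per_s)
print(drives)  # -> 5, before filesystem and protocol overheads
```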

That certainly is a fast transfer of data, and Chin Fang says they can achieve 100 Gbps data transfers for other customers using leased lines. Anybody doing faster transfers than this?
