Geneva, 25 April 2005 – Today, in a significant milestone for scientific grid computing, eight major computing centres successfully completed a challenge to sustain a continuous data flow of 600 megabytes per second (MB/s) on average for 10 days from CERN[1] in Geneva, Switzerland to seven sites in Europe and the US. The total amount of data transmitted during this challenge, 500 terabytes, would take about 250 years to download using a typical 512 kilobit per second household broadband connection.
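For readers who want to check the arithmetic, the figures above can be reproduced with a short back-of-envelope calculation (a sketch in Python, assuming decimal units, i.e. 1 terabyte = 10^12 bytes, and a household link running flat out with no protocol overhead):

    # Back-of-envelope check of the transfer figures quoted above.
    # Assumptions: decimal units (1 TB = 1e12 bytes), no protocol overhead.

    sustained_rate = 600e6              # challenge rate: 600 MB/s, in bytes/s
    duration = 10 * 24 * 3600           # 10 days, in seconds
    total = sustained_rate * duration
    print(total / 1e12)                 # ~518 TB, i.e. roughly the quoted 500 TB

    home_rate = 512e3 / 8               # 512 kbit/s household link, in bytes/s
    year = 365.25 * 24 * 3600           # seconds per year
    print(500e12 / home_rate / year)    # ~248 years, i.e. about 250 years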
This exercise was part of a series of service challenges designed to test the global computing infrastructure for the Large Hadron Collider (LHC) currently being built at CERN to study the fundamental properties of subatomic particles and forces. The service challenge participants included Brookhaven National Laboratory[2] and Fermi National Accelerator Laboratory (Fermilab)[3] in the US, Forschungszentrum Karlsruhe[4] in Germany, CCIN2P3[5] in France, INFN-CNAF[6] in Italy, SARA/NIKHEF[7] in the Netherlands and Rutherford Appleton Laboratory[8] in the UK.
"This service challenge is a key step on the way to managing the torrents of data anticipated from the LHC," said Jamie Shiers, manager of the service challenges at CERN. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1500 megabytes of data every second for over a decade."
The goal of LHC computing is to use a worldwide grid infrastructure of computing centres to provide sufficient computational, storage and network resources to fully exploit the scientific potential of the four major LHC experiments: ALICE, ATLAS, CMS and LHCb. The infrastructure relies on several national and regional science grids. The service challenge used resources from the LHC Computing Grid (LCG) project, the Enabling Grids for E-SciencE (EGEE) project, Grid3/Open Science Grid (OSG), INFNGrid and GridPP.
LHC scientists designed a series of service challenges to ramp up to the level of computing capacity, reliability and ease of use that will be required by the worldwide community of over 6000 scientists working on the LHC experiments. During LHC operation, the major computing centres involved in the Grid infrastructure will collectively store the data from all four LHC experiments. Scientists working at over two hundred other computing facilities in universities and research laboratories around the globe, where much of the data analysis will be carried out, will access the data via the Grid.
Fermilab Computing Division head Vicky White welcomed the results of the service challenge.
"High energy physicists have been transmitting large amounts of data around the world for years," White said. "But this has usually been in relatively brief bursts and between two sites. Sustaining such high rates of data for days on end to multiple sites is a breakthrough, and augurs well for achieving the ultimate goals of LHC computing."
NIKHEF physicist and Grid Deployment Board chairman Kors Bos concurred.
"The challenge here is not just the inherently distributed nature of the Grid infrastructure for the LHC," Bos said, "but also the need to get large numbers of institutes and individuals, all with existing commitments, to work together on an incredibly aggressive timescale."
The current service challenge is the second in a series of four leading up to LHC operations in 2007. It exceeded expectations by sustaining roughly one-third of the ultimate data rate from the LHC, and reaching peak rates of over 800 MB/s. This success was facilitated by the underlying high-speed networks, including DFN, GARR, GEANT, ESnet, LHCnet, NetherLight, Renater, and UKLight.
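As a rough consistency check (treating the "more than 1500 megabytes per second" figure quoted earlier as a lower bound on the eventual LHC rate), the achieved rates compare to that target as follows:

    # Comparing the challenge rates with the quoted eventual LHC rate.
    sustained = 600.0     # MB/s, average sustained for 10 days
    peak = 800.0          # MB/s, peak rate reached during the challenge
    final = 1500.0        # MB/s, lower bound on the final rate (quoted above)
    print(sustained / final)   # 0.4: at most 40% of the final rate, since the
                               # true figure exceeds 1500 MB/s, hence roughly
                               # one-third once the final rate is somewhat higher
    print(peak / final)        # 0.53: the peak briefly exceeded half the target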
The next service challenge, due to start in the summer, will extend to many other computing centres and aim at a three-month period of stable operations. That challenge will allow many of the scientists involved to test their computing models for handling and analyzing the data from the LHC experiments.
Notes for Editors
[1] CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its Member States are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland and the United Kingdom. India, Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission and UNESCO have Observer status.
[2] Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security, and builds and operates major scientific facilities available to university, industry and government researchers. BNL is operated and managed for the U.S. Department of Energy's Office of Science by Brookhaven Science Associates, a limited-liability company founded by Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.
[3] Fermi National Accelerator Laboratory is located in Batavia, Illinois, USA. Fermilab is operated by Universities Research Association, Inc., a consortium of 90 research universities, for the United States Department of Energy's Office of Science.
[4] Forschungszentrum Karlsruhe, a member of the Helmholtz Gemeinschaft Deutscher Forschungszentren (HGF), constructs and operates the GridKa computing center for the German particle physics community and is the designated German Tier 1 for the LHC.
[5] CCIN2P3, the Computing Center of the French National Institute of Nuclear Physics and Particle Physics, is located in Lyon, France. Its main mission is to provide computing resources and storage for experimental data to the Institute's physicists involved in the major experiments of the discipline, particularly international collaborations. In the field of grid computing, CCIN2P3 is one of the leaders of the French grid effort and is deeply involved in the main European grid projects for science.
[6] INFN-CNAF is the National Center for Research and Development in Technology, Computer Science and Data Transmission of INFN (Istituto Nazionale di Fisica Nucleare), and is the major computing facility of the INFN grid infrastructure. INFN, Italy's national nuclear physics institute, supports, coordinates and carries out scientific research in sub-nuclear, nuclear and astroparticle physics, develops relevant technologies, and runs a significant outreach program.
[7] SARA is the National Center for Computing and Networking Services and NIKHEF is the National Institute for Nuclear Physics and High Energy Physics in the Netherlands. The two institutes have joined forces to become an important LHC data storage and analysis center. The Advanced Internet Research Group of the University of Amsterdam contributed substantial manpower and equipment to this service challenge.
[8] The UK Council for the Central Laboratory of the Research Councils (CCLRC) works with the other UK research councils to set future priorities that meet UK science needs. It also operates three world-class research centres: the Rutherford Appleton Laboratory in Oxfordshire, the Daresbury Laboratory in Cheshire and the Chilbolton Observatory in Hampshire. These institutions support the research community by providing access to advanced facilities and extensive scientific and technical expertise. RAL is one of the collaborating institutions in the GridPP project, the UK's contribution to the LHC Computing Grid project.
Web sites of international grid projects involved in the service challenge:
LHC Computing Grid (LCG) project: http://www.cern.ch/lcg/
Enabling Grids for E-SciencE (EGEE): http://public.eu-egee.org/
Grid3: http://www.ivdgl.org/grid3/
GridPP: http://www.gridpp.ac.uk/
INFNGrid: http://grid.infn.it/
Open Science Grid (OSG): http://www.opensciencegrid.org/
For more information contact:
Francois Grey
Gaëlle Shifrin
Mona Rowe
Barbara Gallavotti
Katie Yurkewicz
Kors Bos
Holger Marten
Natalie Bealing MCIPR