When the LHC begins operations, it will produce roughly 15 petabytes (15 million gigabytes) of data annually – enough to fill more than 1.7 million dual-layer DVDs a year!
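The arithmetic behind that figure is simple: a dual-layer DVD holds roughly 8.5 gigabytes, so 15 000 000 GB ÷ 8.5 GB ≈ 1.8 million discs per year – stacked, a pile more than two kilometres tall.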
Thousands of scientists around the world will want to access and analyse this data, so CERN is building a distributed computing and data storage infrastructure: the LHC Computing Grid (LCG). The data from the LHC experiments will be distributed around the globe, with a primary backup recorded on tape at CERN. After initial processing, this data will be distributed to a series of large computer centres with sufficient storage capacity for a large fraction of the data, and with round-the-clock support for the Grid.
These centres will make the data available to other facilities, each consisting of one or several collaborating computing centres dedicated to specific analysis tasks. Individual scientists will access these facilities through local resources such as clusters in a university department or even individual PCs, which may be allocated to the LCG on a regular basis.
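To make the tiered structure concrete, here is a minimal sketch in Python of the data flow just described. It is illustrative only, with hypothetical names; the real LCG middleware (data management, job scheduling, security) is far more elaborate.

    # Illustrative sketch only: hypothetical names, not LCG's actual middleware.
    from dataclasses import dataclass, field

    @dataclass
    class Centre:
        name: str
        tier: int                    # 0 = CERN, 1 = large centre, 2 = analysis facility
        datasets: list = field(default_factory=list)

    def distribute(dataset, tier0, tier1s, tier2_links):
        """Record at Tier-0, replicate to large centres, then serve analysis facilities."""
        tier0.datasets.append(dataset)                # primary backup on tape at CERN
        for centre in tier1s:                         # round-the-clock centres hold large fractions
            centre.datasets.append(dataset)
        for tier2, tier1 in tier2_links:              # each facility pulls from an associated centre
            tier2.datasets.append(f"{dataset} (via {tier1.name})")

    cern = Centre("CERN", tier=0)
    t1_a = Centre("Large centre A", tier=1)
    t1_b = Centre("Large centre B", tier=1)
    uni  = Centre("University cluster", tier=2)

    distribute("run-0001 raw events", cern, [t1_a, t1_b], [(uni, t1_a)])
    print(uni.datasets)               # ['run-0001 raw events (via Large centre A)']

The point of the hierarchy is locality: raw data is archived once at CERN, bulk copies live at a handful of well-supported centres, and individual physicists only ever touch the slice relevant to their own analysis.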
LCG collaborates closely with the other CERN Grid projects:
Enabling Grids for E-sciencE (EGEE): LCG is the primary production environment for this project, which started in April 2004 and aims to establish a Grid infrastructure for a wide range of scientific domains.
CERN openlab: The LCG project is also following developments in industry, in particular through the CERN openlab, where leading IT companies are testing and validating cutting-edge Grid technologies using the LCG environment.