Table of contents

Volume 396

2012

Event Processing

Accepted papers received: 19 November 2012
Published online: 13 December 2012

022001
The following article is Open access

The FairRoot framework is an object-oriented simulation, reconstruction and data analysis framework based on ROOT. It includes core services for detector simulation and offline analysis. The framework delivers base classes which enable users to construct their experimental setup in a fast and convenient way. By using the Virtual Monte Carlo concept it is possible to perform the simulations using either Geant3 or Geant4 without changing the user code or the geometry description. Using and extending the task mechanism of ROOT it is possible to implement complex analysis tasks in a convenient way. Moreover, using the FairCuda interface of the framework it is possible to run some of these tasks also on GPUs. Data I/O, parameter handling and database connections are also handled by the framework. Since some of the experiments will not have an experimental setup with a conventional trigger system, the framework can also handle free-flowing input streams of detector data. For this mode of operation the framework provides classes to create the needed time-sorted input streams of detector data out of the event-based simulation data. There are also tools for radiation studies and for visualizing the simulated data. A CMake-CDash based building and monitoring system is also part of the FairRoot services; it helps to build and test the framework automatically on many different platforms and includes continuous integration.
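
As an illustration of the task mechanism mentioned above, the following minimal sketch shows what a user analysis task might look like; it assumes a FairTask-like base class with Init()/Exec() hooks, and the class, header and branch names are illustrative rather than taken from the paper.

    // Hypothetical sketch of a user analysis task in the ROOT task style described
    // above; the FairTask-like base class, headers and the branch name
    // "DetectorHits" are assumptions for illustration only.
    #include "FairTask.h"          // assumed FairRoot headers
    #include "FairRootManager.h"
    #include "TClonesArray.h"

    class ExampleHitAnalyzer : public FairTask {
     public:
      ExampleHitAnalyzer() : fHits(0) {}
      virtual InitStatus Init() {
        // Fetch the input branch registered by an upstream task (assumed API).
        fHits = dynamic_cast<TClonesArray*>(
            FairRootManager::Instance()->GetObject("DetectorHits"));
        return fHits ? kSUCCESS : kERROR;
      }
      virtual void Exec(Option_t* /*option*/) {
        // Called once per event: loop over hits and fill user histograms here.
        for (Int_t i = 0; i < fHits->GetEntriesFast(); ++i) { /* analyse fHits->At(i) */ }
      }
     private:
      TClonesArray* fHits;         // owned by the framework, not by this task
    };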

022002
The following article is Open access

iSpy is a general-purpose event-data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has also been used by the general public and by teachers and students in the context of education and outreach. Central to the iSpy design philosophy is ease of installation, use, and extensibility.

The application itself uses the open-access packages Qt4 and Open Inventor and is distributed either as a fully-bound executable or a standard installer package: one can simply download and double-click to begin. Mac OS X, Linux, and Windows are supported. iSpy renders the standard 2D, 3D, and tabular views, and the architecture allows for a generic approach to the production of new views and projections.

iSpy reads and displays data in the ig format: event information is written in compressed JSON format files designed for distribution over a network. This format is easily extensible and makes the iSpy client indifferent to the original input data source. The ig format is the one used for release of approved CMS data to the public.

022003
The following article is Open access

The CMS simulation, based on the Geant4 toolkit, has been operational within the new CMS software framework for more than four years. The description of the detector, including the forward regions, has been completed, and detailed investigations of detector positioning and material budget have been carried out using collision data. Detailed modeling of detector noise has been performed and validated with the collision data. In view of the high-luminosity runs of the Large Hadron Collider, simulation of pile-up events has become a key issue. Challenges have arisen in providing a realistic luminosity profile and in modeling out-of-time pileup events, as well as computing issues regarding memory footprint and I/O access. These will be especially severe in the simulation of collision events for the LHC upgrades; a new pileup simulation architecture has been introduced to cope with these issues.

The CMS detector has observed anomalous energy deposits in the calorimeters, and there has been a substantial effort to understand these anomalous signal events present in the collision data. Emphasis has also been given to validation of the simulation code, including the physics of the underlying models of Geant4. Test-beam as well as collision data are used for this purpose. Measurements of mean response, resolution, energy sharing between the electromagnetic and hadron calorimeters, and shower shapes for single hadrons are directly compared with predictions from Monte Carlo. A suite of performance analysis tools has been put in place and has been used to drive several optimizations to allow the code to fit the constraints posed by the CMS computing model.

022004
The following article is Open access

A comprehensive analysis of the effects of Geant4 algorithms for condensed transport in detectors is in progress. The first phase of the project focuses on electron multiple scattering, and studies two related observables: the longitudinal pattern of energy deposition in various materials, and the fraction of backscattered particles. The quality of the simulation is evaluated through comparison with high precision experimental measurements; several versions of Geant4 are analyzed to provide an extensive overview of the evolution of Geant4 multiple scattering algorithms and of their contribution to simulation accuracy.

022005
The following article is Open access

The CMS all-silicon tracker consists of 16588 modules, and its alignment procedures therefore require sophisticated algorithms. Advanced tools for computing, tracking and data analysis have been deployed to reach the targeted performance. Ultimate local precision is now achieved by the determination of sensor curvatures, challenging the algorithms to determine about 200k parameters simultaneously. Systematic biases in the geometry are controlled by adding further information into the alignment workflow, e.g. the mass of decaying resonances. The orientation of the tracker with respect to the magnetic field of CMS is determined with a stand-alone chi-square minimization procedure. The resulting geometries are carefully validated; the monitored quantities include basic track quantities, for tracks from both collisions and cosmic muons, as well as physics observables.
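
For orientation (this formula is generic, not quoted from the paper), track-based alignment of this kind is usually formulated as a simultaneous least-squares minimisation over the global alignment parameters p and the parameters q_j of each track:

    \chi^2(p, q) = \sum_{j \in \text{tracks}} \; \sum_{i \in \text{hits}} r_{ij}^{T}(p, q_j)\, V_{ij}^{-1}\, r_{ij}(p, q_j)

where r_ij are the hit residuals and V_ij their covariance matrices; with roughly 200k alignment parameters the resulting linearised system has to be solved with dedicated large-scale minimisation tools.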

022006
The following article is Open access

We detail recent changes to ROOT-based I/O within the ATLAS experiment. The ATLAS persistent event data model continues to make considerable use of a ROOT I/O backend through POOL persistency. ROOT is also used directly in later stages of analysis that make use of a flat-ntuple-based "D3PD" data type. For POOL/ROOT persistent data, several improvements have been made, including the implementation of automatic basket optimisation, memberwise streaming, and changes to split and compression levels. Optimisations have also been made for the D3PD format. We present a full evaluation of the resulting performance improvements, including the case of selective retrieval of events. We also evaluate, in the ATLAS context, ongoing changes internal to ROOT for both POOL and D3PD data. We report results not only from test systems but also from new automated tests on real ATLAS production resources employing a wide range of storage technologies.
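
The kind of ROOT-side tuning referred to here can be illustrated with a short, generic sketch using standard ROOT calls; the concrete values are illustrative and are not the ATLAS settings.

    // Generic ROOT I/O tuning knobs of the kind discussed above
    // (values are illustrative, not ATLAS production settings).
    #include "TFile.h"
    #include "TTree.h"

    void tune_io_example() {
      TFile f("output.root", "RECREATE");
      f.SetCompressionLevel(6);                 // trade CPU time for smaller files
      TTree tree("events", "example tree");
      tree.SetAutoFlush(-30000000);             // flush baskets every ~30 MB written
      tree.SetBasketSize("*", 128 * 1024);      // uniform basket size for all branches
      // ... create branches, fill, and write as usual ...
      tree.Write();
    }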

022007
The following article is Open access

Because they are produced in the early stages of the collision, heavy-flavor particles are of interest for studying the properties of the matter created in heavy-ion collisions at RHIC. Previous measurements of D and B mesons at RHIC [1] using semi-leptonic probes show a suppression similar to that of light quarks, which contradicts theoretical models including only the gluon radiative energy-loss mechanism. A direct topological reconstruction is therefore needed to obtain a precise measurement of charm meson decays. This method leads to a substantial combinatorial background, which can be reduced by using modern multivariate techniques (TMVA) that make optimal use of all the available information. Comparisons with classical methods and the performance of several classifiers are presented for the reconstruction of the D0 decay vertex (D0 → K−π+) and its charge conjugate in Au+Au collisions at √sNN = 200 GeV.
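
A minimal sketch of the TMVA training workflow referred to above, with illustrative variable names and options rather than the analysis' actual configuration:

    // Minimal TMVA classification training sketch; variables and options are illustrative.
    #include "TFile.h"
    #include "TTree.h"
    #include "TMVA/Factory.h"
    #include "TMVA/Types.h"

    void train_example(TTree* signalTree, TTree* backgroundTree) {
      TFile* out = TFile::Open("tmva_output.root", "RECREATE");
      TMVA::Factory factory("D0Classification", out, "AnalysisType=Classification");
      factory.AddVariable("decayLength", 'F');    // hypothetical discriminating variables
      factory.AddVariable("dcaDaughters", 'F');
      factory.AddVariable("cosPointing", 'F');
      factory.AddSignalTree(signalTree, 1.0);
      factory.AddBackgroundTree(backgroundTree, 1.0);
      factory.PrepareTrainingAndTestTree("", "SplitMode=Random");
      factory.BookMethod(TMVA::Types::kBDT, "BDT", "NTrees=400");
      factory.TrainAllMethods();
      factory.TestAllMethods();
      factory.EvaluateAllMethods();
      out->Close();
    }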

022008
The following article is Open access

The online event selection is crucial to reject most of the events containing uninteresting background collisions while preserving as much as possible the interesting physics signals. The b-jet selection is part of the trigger strategy of the ATLAS experiment; a set of dedicated triggers has been in place since the beginning of the 2011 data-taking period and contributes to keeping the total bandwidth at an acceptable rate. These triggers are used in many physics analyses, especially those with topologies containing more than one b-jet, which benefit from requiring this trigger and thereby achieve higher rejection factors. An overview of the b-jet trigger menu and its performance on data is presented in this contribution.

022009
The following article is Open access

In this paper we describe a software package which was developed to describe the ATLAS muon spectrometer. The package is based on a generic XML detector description (ATLAS Generic Detector Description, AGDD) and is used in the PERSINT visualization program and in a series of parsers, or converters, which build a generic, transient geometry model that can be translated into commonly used geometry descriptions such as Geant4, the ATLAS GeoModel, ROOT TGeo and others. The system presented allows for an easy, self-descriptive approach to the detector description problem, for intuitive visualization and for rapid turn-around: indeed, the results of the description process can be immediately fed into e.g. a Geant4 simulation for rapid prototyping. Examples of the current usage for the ATLAS detector description are given, together with further developments needed to meet future requirements.

022010
The following article is Open access

The SuperB asymmetric-energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab−1 and a luminosity target of 10^36 cm−2 s−1.

These parameters require a substantial growth in computing requirements and performance. The SuperB collaboration is thus investigating the advantages of new CPU architectures (multi- and many-core) and how to exploit their capability for task parallelization in the simulation and analysis software framework. In this work we present the underlying architecture which we intend to use and some preliminary performance results from the first framework prototype.

022011
The following article is Open access

The estimation of the compatibility of large numbers of histogram pairs is a recurrent problem in high energy physics. The issue is common to several different areas, from software quality monitoring to data certification, preservation and analysis. Given two sets of histograms, it is very important to be able to scrutinize the outcome of several goodness-of-fit tests, obtain a clear answer about the overall compatibility, easily spot the single anomalies and directly access the histogram pairs concerned. This procedure must be automated in order to reduce the human workload, thereby improving the process of identifying differences, which is usually carried out by a trained human mind. Some solutions to this problem have been proposed, but they are experiment specific. RelMon, the tool presented here, depends only on ROOT and offers several goodness-of-fit tests (e.g. chi-squared or Kolmogorov-Smirnov). It produces highly readable web reports, in which aggregations of the comparison rankings are available as well as all the plots of the single histogram overlays. The comparison procedure is fully automatic and scales smoothly towards ensembles of millions of histograms. Examples of RelMon use within the regular workflows of the CMS collaboration and the advantages thereby obtained are described. Its interplay with the data quality monitoring infrastructure is illustrated, as well as its role in the QA of the event reconstruction code, its integration in the CMS software release cycle, CMS user data analysis and dataset validation.
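
The goodness-of-fit tests mentioned correspond to standard ROOT functionality; a minimal sketch of comparing one histogram pair is shown below (illustrative, not RelMon's actual code, and the p-value threshold is arbitrary).

    // Comparing one histogram pair with the tests mentioned above
    // (plain ROOT calls; the threshold is illustrative).
    #include "TH1.h"
    #include <cstdio>

    bool compatible(const TH1* reference, const TH1* candidate, double threshold = 0.05) {
      const double pChi2 = reference->Chi2Test(candidate, "UU NORM");  // chi-squared p-value
      const double pKS   = reference->KolmogorovTest(candidate);       // Kolmogorov-Smirnov p-value
      std::printf("chi2 p = %.3f, KS p = %.3f\n", pChi2, pKS);
      return pChi2 > threshold && pKS > threshold;
    }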

022012
The following article is Open access

In this paper, we present the High Energy Physics data format, processing toolset and analysis library A4, providing fast I/O of structured data using the Google protocol buffer library. The overall goal of A4 is to provide physicists with tools to work efficiently with billions of events, providing not only high speeds, but also automatic metadata handling, a set of UNIX-like tools to operate on A4 files, and powerful and fast histogramming capabilities. At present, A4 is an experimental project, but it has already been used by the authors in preparing physics publications. We give an overview of the individual modules of A4, provide examples of use, and supply a set of basic benchmarks. We compare A4 read performance with the common practice of storing unstructured data in ROOT trees. For the common case of storing a variable number of floating-point numbers per event, speedups in read speed of up to a factor of six are observed.

022013
The following article is Open access

An overview of the current status of the electromagnetic (EM) physics of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large-scale production for the LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy without compromising CPU speed for EM particle transport. New biasing options have been introduced, which are applicable to any EM process. These include algorithms to enhance and suppress processes, force interactions or split secondary particles. It is shown that the performance of the EM sub-package is improved. We also report extensions of the testing suite allowing high-statistics validation of EM physics, including validation of multiple scattering, bremsstrahlung and other models. Cross checks between standard and low-energy EM models have been performed using evaluated data libraries and reference benchmark results.

022014
The following article is Open access

Detector simulation is one of the most CPU-intensive tasks in modern High Energy Physics. While its importance for the design of the detector and the estimation of the efficiency is ever increasing, the number of events that can be simulated is often constrained by the available computing resources. Various kinds of "fast simulation" have been developed to alleviate this problem; however, while successful, these are mostly "ad hoc" solutions which do not completely replace the need for detailed simulations. One common feature of both detailed and fast simulation is the inability of the codes to fully exploit the parallelism which is increasingly offered by the new generations of CPUs. In the next years it is reasonable to expect an increase in the needs for detector simulation on one side, and in the parallelism of the hardware on the other, widening the gap between the needs and the available means. In the past years, and indeed since the beginning of simulation programs, several unsuccessful efforts have been made to exploit the "embarrassing parallelism" of simulation programs. After a careful study of the problem, and based on long experience with simulation codes, the authors have concluded that an entirely new approach has to be adopted to exploit parallelism. The paper reviews the current prototyping work, encompassing both detailed and fast simulation use cases. Performance studies are presented, together with a roadmap to develop a new full-fledged transport program that efficiently exploits parallelism for the physics and geometry computations, while adapting the steering mechanisms to accommodate detailed and fast simulation in a single framework.

022015
The following article is Open access

The conversion of photons into electron-positron pairs in the detector material is a nuisance in the event reconstruction of high energy physics experiments, since the measurement of the electromagnetic component of the interaction products is degraded. Nonetheless, this unavoidable detector effect can also be extremely useful. The reconstruction of photon conversions can be used to probe the detector material and to accurately measure soft photons that come from radiative decays in heavy flavor physics. In fact, a converted photon can be measured with very high momentum resolution by exploiting the excellent charged-track reconstruction of a tracking detector such as that of CMS at the LHC. The main issue is that photon conversion tracks are difficult to reconstruct for standard reconstruction algorithms: they are typically soft and very displaced from the primary interaction vertex. An innovative seeding technique that exploits the peculiar photon conversion topology, successfully applied in the CMS track reconstruction sequence, is presented. The performance of this technique and the substantial enhancement of the photon conversion reconstruction efficiency are discussed. Application examples are given.

022016
The following article is Open access

The high data rates at the LHC necessitate the use of biasing selections already at the trigger level. Consequently, the correction of the biases induced by these selections becomes one of the main challenges for analyses. This paper presents the LHCb implementation of a data-driven method for extracting such biases which entirely avoids uncertainties associated with detector simulation. Its novelty lies in the LHCb trigger, which is implemented entirely in software, allowing its decisions to be reproduced exactly offline. It is demonstrated that this method allows the control of selection biases to better than 0.1%, and that it greatly enhances the range of physics which can be performed by the LHCb experiment. The implications of trigger and software architectures for the long-term viability of this method, in particular with respect to the reproducibility of trigger decisions when running the same code on different underlying hardware or compilers, are discussed.

022017
The following article is Open access

The analysis of the complex LHC data usually follows a standard path that aims at minimizing not only the amount of data but also the number of observables used. After a number of slimming and skimming steps, the remaining few terabytes of ROOT files hold a selection of the events and a flat structure for the required variables that can be more easily inspected and traversed in the final stages of the analysis. PROOF arises at this point as an efficient mechanism to distribute the analysis load by taking advantage of all the cores in modern CPUs through PROOF Lite, or by using PROOF Cluster or PROOF on Demand tools to build dynamic PROOF clusters on computing facilities with spare CPUs. However, using PROOF at the level required for a serious analysis introduces some difficulties that may deter new adopters. We have developed the PROOF Analysis Framework (PAF) to facilitate the development of new analyses by uniformly exposing the PROOF-related configurations across technologies and by taking care of the routine tasks as much as possible. We describe the details of the PAF implementation as well as how we succeeded in engaging a group of CMS physicists to use PAF as their daily analysis framework.
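
For context, running an existing TSelector-based analysis with PROOF Lite takes only a few lines of standard ROOT; the selector, tree and file names below are placeholders.

    // Running a TSelector-based analysis with PROOF Lite (names are placeholders).
    #include "TChain.h"
    #include "TProof.h"

    void run_with_proof_lite() {
      TProof::Open("lite://");                // start a PROOF-Lite session on the local cores
      TChain chain("events");                 // tree name is a placeholder
      chain.Add("skimmed_ntuples/*.root");
      chain.SetProof();                       // route Process() calls through PROOF
      chain.Process("MyAnalysisSelector.C+"); // compile and run the user's selector
    }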

022018
The following article is Open access

As particle detectors and physics analyses become more complex, the need for more detailed simulations of the detector response becomes increasingly important. Traditionally, two-dimensional engineering drawings have been interpreted by physicists who then implement approximations to the individual parts in software simulation packages. This process requires a high level of experience in both the abstraction of the physical volumes and the implementation in code, and is prone to error. mesh2gdml is a program which allows three-dimensional tessellated geometrical volumes to be converted into a format which the Geant4 simulation toolkit can import directly. This provides a pathway for simulating the response of detector elements which have been designed in CAD programs without having to write any code.
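
On the Geant4 side, importing the resulting GDML file uses the toolkit's standard parser; a minimal sketch follows (the file name is a placeholder, not one from the paper).

    // Importing a GDML file, e.g. one produced by mesh2gdml, into Geant4
    // (standard G4GDMLParser usage; the file name is a placeholder).
    #include "G4GDMLParser.hh"
    #include "G4VPhysicalVolume.hh"

    G4VPhysicalVolume* BuildWorldFromGDML() {
      G4GDMLParser parser;
      parser.Read("detector_from_cad.gdml");   // validates against the GDML schema by default
      return parser.GetWorldVolume();          // hand this to the detector-construction class
    }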

022019
The following article is Open access

Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company; incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open-source libraries which allow the programmatic creation of 3D content in PDF format. While these do not provide the same level of access to CAD files as the commercial software, they do provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

022020
The following article is Open access

Future "Intensity Frontier" experiments at Fermilab are likely to be conducted by smaller collaborations, with fewer scientists, than is the case for recent "Energy Frontier" experiments. art is a C++ event-processing framework designed with the needs of such experiments in mind. An evolution from the framework of the CMS experiment, art was designed and implemented to be usable by multiple experiments without imposing undue maintenance requirements on either the art developers or the experiments using it. We describe the key requirements and features of art and the rationale behind evolutionary changes, additions and simplifications with respect to the CMS framework. In addition, our package distribution system and our collaborative model with respect to the multiple experiments using art help keep the maintenance burden low. We also describe in-progress and future enhancements to the framework, including strategies we are using to allow multi-threaded use of the art framework in today's multi- and many-core environments.

022021
The following article is Open access

Three-dimensional image reconstruction in medical applications (PET or X-ray CT) applies sophisticated filter algorithms to the linear trajectories of coincident photon pairs or X-rays, with the goal of reconstructing an image of an emitter density distribution. In a similar manner, tracks in particle physics originate from vertices that need to be distinguished from background track combinations. In this study it is investigated whether vertex reconstruction in high energy proton collisions may benefit from medical imaging methods. A new vertex-finding method, the Medical Imaging Vertexer (MIV), is presented, based on a three-dimensional filtered backprojection algorithm. It is compared to the open-source RAVE vertexing package. The performance of the vertex-finding algorithms is evaluated as a function of instantaneous luminosity using simulated LHC collisions. Tracks in these collisions are described by a simplified detector model which is inspired by the tracking performance of the LHC experiments. At high luminosities (25 pileup vertices and more), the medical imaging approach finds vertices with a higher efficiency and purity than the RAVE "Adaptive Vertex Reconstructor" algorithm. It is also much faster if more than 25 vertices are to be reconstructed, because its CPU time rises linearly with the number of tracks whereas it rises quadratically for the adaptive vertex fitter AVR.

022022
The following article is Open access

The line between native and web applications is becoming increasingly blurred as modern web browsers become powerful platforms on which applications can be run. Such applications are trivial to install, readily extensible and easy to use. In an educational setting, web applications provide a way to deploy tools in highly restrictive computing environments.

The I2U2 collaboration has developed a browser-based event display for viewing events in data collected and released to the public by the CMS experiment at the LHC. The application itself reads a JSON event format and uses the JavaScript 3D rendering engine pre3d. The only requirement is a modern browser using HTML5 canvas. The event display has been used by thousands of high school students in the context of programs organized by I2U2, QuarkNet, and IPPOG. This browser-based approach to display of events can have broader usage and impact for experts and public alike.

022023
The following article is Open access

Linear Energy Transfer (LET) is a measure of the energy transferred to a material as an ionizing particle passes through it. This quantity is useful in estimating the biological effects of ionizing radiation as expressed in dosimetric endpoints such as dose equivalent. Pixel detectors with silicon sensors, like the Medipix2 Collaboration's Timepix-based devices, are ideal instruments to measure the total energy deposited by a transiting ionizing particle. In this paper we propose an approach for determining the LET from track images obtained with a Timepix-based Si pixel detector. In particular, we have developed a method to calculate the angle of incidence of a heavy ion as it passes through a 300 μm thick Si sensor layer, based on an analysis of the information in the cluster of pixel hits. Using that angle, the path length traversed by the particle can be computed, which then allows the LET to be estimated. Results from experiments with data taken at the HIMAC (Heavy Ion Medical Accelerator) facility in Chiba, Japan, and the NASA Space Radiation Laboratory at Brookhaven in the USA demonstrate the effectiveness and resolution of our method in determining the angle of incidence and LET of heavy ions.
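
The path-length step of the method can be summarised in a few lines; the sketch below assumes the simplification of a straight track crossing the full 300 μm sensor thickness.

    // Sketch of the path-length and LET estimate described above, assuming a
    // straight track that crosses the full sensor thickness (simplification).
    #include <cmath>

    // thetaRad: reconstructed angle of incidence w.r.t. the sensor normal
    // depositedEnergyKeV: total cluster energy measured by the Timepix device
    double estimateLET_keV_per_um(double depositedEnergyKeV, double thetaRad,
                                  double thicknessUm = 300.0) {
      const double pathLengthUm = thicknessUm / std::cos(thetaRad);  // chord through the sensor
      return depositedEnergyKeV / pathLengthUm;                      // LET ~ dE/dx along the track
    }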

022024
The following article is Open access

The Virtual Monte Carlo (VMC) [1] provides an abstract interface to the Monte Carlo transport codes GEANT 3.21 [2], Geant4 [3], and FLUKA [4]. A user's VMC-based application, being independent of the specific Monte Carlo codes, can then be run with all supported simulation programs. VMC has been developed by the ALICE Offline Project and has drawn attention from other experimental frameworks.

Since its first release in 2002, the implementation of the VMC for Geant4 (Geant4 VMC) has been continuously maintained and developed, driven by the evolution of Geant4 on one side and the requirements of users on the other. In this paper we report on new features in this tool, present its multi-threaded development version based on the Geant4 MT prototype [5], and show timing comparisons of equivalent native Geant4 and VMC test applications.

022025
The following article is Open access

The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It is responsible for the first processing steps of data from the CMS Experiment at CERN. This presentation covers the complete overhaul (rewrite) of the system for the 2012 run, to bring it into line with the new CMS Workload Management system, improving scalability and maintainability for the next few years.

022026
The following article is Open access

Recent PC servers are equipped with multi-core CPUs, and it is desirable to utilize their full processing power for data analysis in large-scale HEP experiments. The basf2 software framework is being developed for use in the Belle II experiment, a new-generation B-factory experiment at KEK, and parallel event processing to utilize multi-core CPUs is part of its design for use in massive data production. The details of the implementation of parallel event processing in the basf2 framework are discussed, together with a preliminary performance study under realistic use on a 32-core PC server.

022027
The following article is Open access

Traditionally, HEP experiments exploit the multiple cores in a CPU by having each core process one event. However, future PC designs are expected to use CPUs which double the number of processing cores at the same rate as the cost of memory falls by a factor of two. This effectively means the amount of memory per processing core will remain constant. This is a major challenge for LHC processing frameworks, since the LHC is expected to deliver more complex events (e.g. with greater pileup) in the coming years while the LHC experiments' frameworks are already memory constrained. Therefore, in the not so distant future, we may need to be able to use multiple cores efficiently to process one event. In this presentation we discuss a design for an HEP processing framework which allows very fine-grained parallelization within one event as well as supporting the processing of multiple events simultaneously, while minimizing the memory footprint of the job. The design is built around the libdispatch framework created by Apple Inc. (a port for Linux is available), whose central concept is the use of task queues. This design also accommodates the reality that not all code will be thread safe and therefore allows one to easily mark modules or sub-parts of modules as being thread-unsafe. In addition, the design efficiently handles the requirement that events in one run must all be processed before starting to process events from a different run. After explaining the design, we provide measurements from simulating different processing scenarios, where the processing times used in the simulation are drawn from times measured in actual CMS event processing.
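
A minimal illustration of the libdispatch task-queue model referred to above (plain C API, usable from C++); the work function and loop are placeholders, not the framework's actual scheduling code.

    // Minimal libdispatch task-queue example (placeholder work function;
    // not the framework's actual scheduling code).
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static void process_module(void* context) {
      printf("running module %ld\n", (long)context);      // stands in for one reconstruction module
    }

    int main(void) {
      dispatch_queue_t queue = dispatch_queue_create("event.modules", DISPATCH_QUEUE_CONCURRENT);
      dispatch_group_t group = dispatch_group_create();
      for (long m = 0; m < 8; ++m)                         // independent modules of one event
        dispatch_group_async_f(group, queue, (void*)m, process_module);
      dispatch_group_wait(group, DISPATCH_TIME_FOREVER);   // the event is done when all modules finish
      dispatch_release(group);
      dispatch_release(queue);
      return 0;
    }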

022028
The following article is Open access

The Mu2e experiment at Fermilab is in the midst of its R&D and approval processes. To aid and inform this process, a small team has developed an end-to-end Geant4-based simulation package and has developed reconstruction code that is already at the stage of an advanced prototype. Having these tools available at an early stage allows design options and tradeoffs to be studied using high level physics quantities. A key to the success of this effort has been, as much as possible, to acquire software and customize it, rather than to build it in-house.

022029
The following article is Open access

The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In the case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detectors (TRD). In this contribution, algorithms which were developed for electron reconstruction and identification in the RICH and TRD detectors are presented. A fast RICH ring recognition algorithm based on the Hough transform method was implemented. Since most of the RICH rings have elliptic shapes, an ellipse fitting algorithm was implemented as well. An efficient algorithm based on an artificial neural network is used for electron identification in the RICH. In the TRD, a track reconstruction algorithm based on track following and Kalman filter methods was implemented. Several algorithms for electron identification in the TRD were developed and investigated; the best-performing algorithm is based on a special transformation of the energy losses measured in the TRD and the use of the Boosted Decision Tree (BDT) method as classifier. Results and comparisons of the different methods of electron identification and pion suppression are presented.

022030
The following article is Open access

The Silicon Vertex Detector (SVD) of the Belle II experiment is a newly developed device with four measurement layers. Track finding in the SVD will be done both in conjunction with the Central Drift Chamber and in stand-alone mode. The reconstruction of very-low-momentum tracks in stand-alone mode is a big challenge, especially in view of the low redundancy and the large expected background. We describe an approach for track finding in this domain, where a cellular automaton and a Kalman filter are combined with a Hopfield network which finds an optimal subset of non-overlapping tracks. We present results on simulated data and evaluate them in terms of efficiency and purity.

022031
The following article is Open access

Monte Carlo simulations of physics events, including detailed simulation of the detector response, are indispensable for every analysis of high-energy physics experiments. As these simulated data sets must be both large and precise, their production is a CPU-intensive task. Increasing the recorded luminosity at the Large Hadron Collider (LHC), and hence the amount of data to be analyzed, leads to a steadily rising demand for simulated MC statistics for systematics and background studies. These huge MC requirements for more refined physics analyses can only be met through the implementation of fast simulation strategies which enable faster production of large MC samples. ATLAS has developed full and fast detector simulation techniques to achieve this goal within the computing limits of the collaboration. We present Atlfast-II, which uses the FastCaloSim package in the calorimeter and reduces the simulation time by one order of magnitude by means of parameterizations of the longitudinal and lateral energy profile, and Atlfast-IIF with the fast track simulation engine Fatras, which achieves a further simulation time reduction of one order of magnitude in the Inner Detector and Muon System. Finally, we present the new Integrated Simulation Framework (ISF), which is designed to allow all simulation types to be run in the same job, even within the same sub-detector, for different particles. The ISF is designed to be extensible to new simulation types as well as to the application of parallel computing techniques in the future. It can be easily configured by the user to find an optimal balance between precision and execution time, according to the specific physics requirements of their analysis.

022032
The following article is Open access

A general perspective of the CHEP 2012 Event Processing track is given, and some predictions for the future are offered.

022033
The following article is Open access

A project to allow long-term access and physics analysis of ZEUS data (ZEUS data preservation) has been established in collaboration with the DESY-IT group. In the ZEUS approach, the analysis model is based on the Common Ntuple project, under development since 2006. The real data and all presently available Monte Carlo samples are being preserved in a flat ROOT ntuple format. Work is ongoing to provide the ability to simulate new, additional Monte Carlo samples in the future. The validation framework for such a scheme, using virtualisation techniques, is being explored. The goal is to validate the frozen ZEUS software against future changes in hardware and operating system. A cooperation between ZEUS, DESY-IT and the library was established for document digitisation and long-term preservation of collaboration web pages. Part of the ZEUS internal documentation has already been stored within the HEP documentation system INSPIRE. Existing digital documentation, needed to perform physics analysis also in the future, is being centralised and completed.

022034
The following article is Open access

Pandora is a robust and efficient framework for developing and running pattern-recognition algorithms. It was designed to perform particle flow calorimetry, which requires many complex pattern-recognition techniques to reconstruct the paths of individual particles through fine granularity detectors. The Pandora C++ software development kit (SDK) consists of a single library and a number of carefully designed application programming interfaces (APIs). A client application can use the Pandora APIs to pass details of tracks and hits/cells to the Pandora framework, which then creates and manages named lists of self-describing objects. These objects can be accessed by Pandora algorithms, which perform the pattern-recognition reconstruction. Development with the Pandora SDK promotes the creation of small, re-usable algorithms containing just the kernel of a specific operation. The algorithms are configured via XML and can be nested to perform complex reconstruction tasks. As the algorithms only access the Pandora objects in a controlled manner, via the APIs, the framework can perform most book-keeping and memory-management operations. The Pandora SDK has been fully exploited in the implementation of PandoraPFA, which uses over 60 algorithms to provide the state of the art in particle flow calorimetry for ILC and CLIC.
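
The algorithm pattern described can be sketched as follows; the base class, return type and method names follow the description above and are assumptions for illustration, not quotations from the SDK documentation.

    // Skeleton of a small, reusable Pandora-style algorithm as described above;
    // class names, headers and signatures are assumed for illustration only.
    #include "Pandora/Algorithm.h"   // assumed header location

    class ExampleClusterMergingAlgorithm : public pandora::Algorithm {
     public:
      pandora::StatusCode Run() {
        // Kernel of one specific operation: access the current named list of
        // clusters via the Pandora APIs and merge compatible clusters here.
        return pandora::STATUS_CODE_SUCCESS;
      }
      pandora::StatusCode ReadSettings(const TiXmlHandle xmlHandle) {
        // Algorithm parameters arrive through the XML configuration (per the abstract).
        return pandora::STATUS_CODE_SUCCESS;
      }
    };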

022035
The following article is Open access

The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. On the other hand, graphics processors (GPUs) have become much more powerful and by far outperform standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms.

In this paper, track finding in the ATLAS experiment is used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited GPU environment, in terms of memory, cache, and transfer speed to and from the GPU, and to make use of the massively parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained are discussed.

022036
The following article is Open access

To understand in detail cosmic magnetic fields and the sources of Ultra-High Energy Cosmic Rays (UHECRs), we have developed a Monte Carlo simulation for galactic and extragalactic propagation. In our approach we identify three different propagation regimes for UHECRs: the Milky Way, the local universe out to 110 Mpc, and the distant universe. For deflections caused by the galactic magnetic field, a matrix-based lensing technique is applied, in which the matrices are created by backtracking antiparticles through galactic field models. Propagation in the local universe uses forward tracking through structured magnetic fields extracted from simulations of the large-scale structure of the universe. UHECRs from distant sources are simulated using parameterized models. In this contribution we present the combination of all three simulation techniques by means of probability maps. The combined probability maps are used to generate a large number of UHECRs and to create distributions from approximately realistic universe scenarios. Comparisons with physics analyses of UHECR measurements enable the development of new analysis techniques and help to constrain parameters of the underlying physics models, such as the source density and the magnetic field strength in the universe.

022037
The following article is Open access

Track fitting in the new inner tracker of the Belle II experiment uses the GENFIT package. In the latter both a standard Kalman filter and a robust extension, the deterministic annealing filter (DAF), are implemented. This contribution presents the results of a simulation experiment which examines the performance of the DAF in the inner tracker, in terms of outlier detection ability and of the impact of different kinds of background on the quality of the fitted tracks.

022038
The following article is Open access

Ongoing investigations for the improvement of Geant4 accuracy and computational performance resulting from refactoring and reengineering parts of the code are discussed. Issues in refactoring that are specific to the domain of physics simulation are identified and their impact is elucidated. Preliminary quantitative results are reported.

022039
The following article is Open access

Recent efforts for the improvement of the accuracy of physics data libraries used in particle transport are summarized. Results are reported from a large-scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.

022040
The following article is Open access

The ATLAS Pixel detector is currently measuring particle positions in 8 TeV proton-proton collisions at the LHC. In the dense environment of high-transverse-momentum jets produced in these events, the separation between particles becomes small, such that their respective charge deposits are reconstructed as single clusters. A Neural Network (NN)-based clustering algorithm has been developed to identify such merged clusters. By using all cluster information, the NN is well suited to estimating the particle multiplicity and, for each of the estimated particles, the position with its uncertainty. As a result of the NN reconstruction, the number of hits shared by several tracks is strongly reduced. Furthermore, the impact parameter resolution improves by about 15%, which indicates boosted prospects for physics analysis.

022041
The following article is Open access

Presented in this contribution are methods currently developed and used by the ATLAS collaboration to measure the performance of the primary vertex reconstruction algorithms. With the increasing instantaneous luminosity at the LHC, many proton-proton collisions occur simultaneously in one bunch crossing. The correct identification of the primary vertex from a hard scattering process and the knowledge of the number of additional pile-up interactions is crucial for many physics analyses. Under high pile-up conditions, additional effects such as splitting one vertex into many, or reconstructing several interactions as one, also become sizable. The mathematical methods, their software implementation, and the corresponding performance studies are presented. Statistical methods based on data and on Monte Carlo simulation are both used to disentangle and understand the different contributions.

022042
The following article is Open access

A pattern recognition software for a continuously operating high-rate Time Projection Chamber with Gas Electron Multiplier amplification (GEM-TPC) has been designed and tested. Space points are delivered by a track-independent clustering algorithm. A true three-dimensional track follower combines them into helical tracks, without constraints on the vertex position. Fast helix fits, based on a conformal mapping onto the Riemann sphere, are the basis for deciding whether points belong to one track. To assess the performance of the algorithm in a high-rate environment, pp interactions at a rate of 2 × 10^7 s−1, the maximum rate foreseen for PANDA, have been simulated. The pattern recognition is capable of finding different kinds of track topologies with high efficiency and provides excellent seed values for track fitting or online event selection. The feasibility of event deconvolution has been demonstrated: different techniques to separate the tracks of an event with known time from the other tracks in the TPC are presented in this paper.
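
One common textbook formulation of the conformal mapping onto the Riemann sphere used for such fast helix fits (not necessarily the exact convention of this paper) maps each space point's transverse coordinates (x, y) onto the sphere via

    u = \frac{x}{1 + x^2 + y^2}, \qquad v = \frac{y}{1 + x^2 + y^2}, \qquad w = \frac{x^2 + y^2}{1 + x^2 + y^2}

so that points lying on a circle in the (x, y) plane end up on a plane intersecting the sphere; the circle fit then reduces to a fast linear plane fit, and the distance of a point from the fitted plane can be used to decide whether it belongs to the track.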

022043
The following article is Open access

In 2011 the LHC provided excellent data; the integrated luminosity of about 5 fb−1 was more than expected. The price for this huge data set is the in- and out-of-time pileup, additional soft collisions overlaid on top of the interesting collision. The reconstruction software is very sensitive to these additional particles in the event, as the reconstruction time increases due to the increased combinatorics. During the running of the experiment in 2011, several successful changes to the software were made that sped up the reconstruction. Pileup has different effects on the various detector technologies used in ATLAS, and a general recipe for all subdetectors is not applicable.

022044
The following article is Open access

The CMS tracking code is organized in several levels, known as 'iterative steps', each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles produced at secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, followed by a final filtering and cleaning. Each subsequent step works on hits not yet associated with a reconstructed particle trajectory. The CMS tracking code underwent a major upgrade, deployed in two phases, needed to keep the reconstruction computing load compatible with the increasing instantaneous luminosity of the LHC, which results in a large number of primary vertices and tracks per bunch crossing. The improvements are described: among others, the iterative steps have been reorganized and optimized, and an iterative step specialized for the reconstruction of photon conversions has been added. The overall impact on reconstruction performance is discussed and prospects for future applications are given.

022045
The following article is Open access

NA61/SHINE (SHINE = SPS Heavy Ion and Neutrino Experiment) is an experiment at the CERN SPS using the upgraded NA49 hadron spectrometer. Among its physics goals are precise hadron production measurements for improving calculations of the neutrino beam flux in the T2K neutrino oscillation experiment as well as for more reliable simulations of cosmic-ray air showers. Moreover, p+p, p+Pb and nucleus+nucleus collisions will be studied extensively to allow for a study of properties of the onset of deconfinement and search for the critical point of strongly interacting matter.

Currently NA61/SHINE uses the old NA49 software framework for reconstruction, simulation and data analysis. The core of this legacy framework was developed in the early 1990s. It is written in different programming and scripting languages (C, pgi-Fortran, shell) and provides several concurrent data formats for the event data model, which also includes obsolete parts.

In this contribution we introduce the new software framework, called Shine, which is written in C++ and designed to comprise three principal parts: a collection of processing modules which can be assembled and sequenced by the user via XML files; an event data model which contains all simulation and reconstruction information, based on STL and ROOT streaming; and a detector description which provides data on the configuration and state of the experiment. To ensure a quick migration to the Shine framework, wrappers were introduced that allow legacy code to run as modules in the new framework, and we present first results on the cross-validation of the two frameworks.

022046
The following article is Open access

GPGPU computing offers extraordinary increases in pure processing power for parallelizable applications. In IceCube we use GPUs for ray-tracing of Cherenkov photons in the Antarctic ice as part of the detector simulation. We report on how we implemented the mixed simulation production chain to include processing on the GPGPU cluster for IceCube Monte Carlo production. We also present ideas to include GPGPU-accelerated reconstructions in the IceCube data processing.

022047
The following article is Open access

The final step in a HEP data-processing chain is usually to reduce the data to a 'tuple' form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processing step. By using tools from this package, physics analysis groups can produce tuples customized for a particular analysis but which are still consistent in format and vocabulary with those produced by other physics groups. The package is designed so that almost all the code is independent of the specific form used to store the tuple. The code that does depend on this is grouped into a set of small backend packages. While the ROOT backend is the most used, backends also exist for HDF5 and for specialized databases. By now, the majority of ATLAS analyses rely on this package, and it is an important contributor to the ability of ATLAS to rapidly analyze physics data.

022048
The following article is Open access

The PANDA experiment, to be built at the FAIR facility, will study collisions of antiproton beams, with momenta ranging from 2 to 15 GeV/c, on fixed proton and nuclear targets in the charm energy range. In preparation for the experiment, the PandaRoot software framework is under development for detector simulation, reconstruction and data analysis, running on an Alien2-based grid. The basic features are handled by the FairRoot framework, based on ROOT and the Virtual Monte Carlo, while the PANDA detector specifics and reconstruction code are implemented inside PandaRoot. The preparation of Technical Design Reports for the tracking detectors has pushed the finalization of the tracking reconstruction code, which is complete for the Target Spectrometer, and of the analysis tools. Particle identification algorithms are currently implemented using a Bayesian approach and compared to multivariate analysis methods. Moreover, the PANDA data acquisition foresees a triggerless operation in which events are not defined by a hardware first-level trigger decision; instead, all signals are stored with time stamps, requiring a deconvolution by the software. This has led to a redesign of the software from an event basis to a time-ordered structure. In this contribution, the reconstruction capabilities of the PANDA spectrometer are reported, focusing on the performance of the tracking system and the results of the analysis of physics benchmark channels, as well as the new (and challenging) concept of time-based simulation and its implementation.

022049
The following article is Open access

The ATLAS experiment at the LHC collider recorded more than 5 fb−1 of data from pp collisions at a centre-of-mass energy of 7 TeV during 2011. The recorded data are promptly reconstructed in two steps at a large computing farm at CERN to provide fast access to high quality data for physics analysis. In the first step, a subset of the data, corresponding to the express stream with an event rate of 10 Hz, is processed in parallel with data taking. Data quality, detector calibration constants, and the beam spot position are determined using the reconstructed data within 48 hours. In the second step all recorded data are processed with the updated parameters. The LHC significantly increased the instantaneous luminosity and the number of interactions per bunch crossing in 2011, and the data recording rate of ATLAS exceeds 400 Hz. To cope with these challenges, the performance and reliability of the ATLAS reconstruction software have been improved. In this paper we describe how the prompt data reconstruction system quickly and stably provides high quality data to analysers.

022050
The following article is Open access

Modern experiments in hadron and particle physics search for ever rarer decays which have to be extracted from a huge background of particles. To achieve this goal a very high precision of the experiments is required, which must also be matched by the simulation software. Therefore a very detailed description of the experiment's hardware is needed, including even tiny details.

To help the developers of the simulation code achieve the required level of geometric detail, a semi-automatic tool was developed which is able to convert geometry descriptions coming from CAD programs into ROOT geometries that can be used directly in any ROOT-based simulation software.

022051
The following article is Open access

The time calibration of the barrel TOF system of BESIII is studied in this paper. Time resolutions of about 97 ps and 78 ps have been achieved for single-layer and double-layer measurements, respectively, for electrons in Bhabha events. The pulse-height correction using the electronics scan curve and the predicted time calculated using the Kalman filter method are introduced. This paper also describes the analysis of correlations in the measured times.

022052
The following article is Open access

Fireworks, the event-display program of CMS, was extended with an advanced geometry visualization package. ROOT's TGeo geometry is used as internal representation, shared among several geometry views. Each view is represented by a GUI list-tree widget, implemented as a flat vector to allow for fast searching, selection, and filtering by material type, node name, and shape type. Display of logical and physical volumes is supported. Color, transparency, and visibility flags can be modified for each node or for a selection of nodes. Further operations, like opening of a new view or changing of the root node, can be performed via a context menu. Node selection and graphical properties determined by the list-tree view can be visualized in any 3D graphics view of Fireworks. As each 3D view can display any number of geometry views, a user is free to combine different geometry-view selections within the same 3D view. Node-selection by proximity to a given point is possible. A visual clipping box can be set for each geometry view to limit geometry drawing into a specified region. Visualization of geometric overlaps, as detected by TGeo, is also supported. The geometry visualization package is used for detailed inspection and display of simulation geometry with or without the event data. It also serves as a tool for geometry debugging and inspection, facilitating development of geometries for CMS detector upgrades and for SLHC.
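
The per-node display properties mentioned map directly onto standard TGeo calls; a small generic sketch follows (plain ROOT, not Fireworks code).

    // Adjusting display properties of TGeo nodes, as described above
    // (plain ROOT TGeo calls; not Fireworks code).
    #include "TGeoManager.h"
    #include "TGeoNode.h"
    #include "TGeoVolume.h"

    void highlight_daughters_of_top() {
      TGeoNode* top = gGeoManager->GetTopNode();
      for (Int_t i = 0; i < top->GetNdaughters(); ++i) {
        TGeoVolume* vol = top->GetDaughter(i)->GetVolume();
        vol->SetVisibility(kTRUE);
        vol->SetLineColor(kRed);        // colour can be set per volume
        vol->SetTransparency(60);       // 0 = opaque, 100 = fully transparent
      }
    }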

022053
The following article is Open access

The SuperB asymmetric e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab−1 and a peak luminosity of 10^36 cm−2 s−1.

The SuperB Computing group is working on developing a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the detector design definition and its performance evaluation studies. Over the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and several Grid flavours.

In this paper we give a complete description of the current state of the production system, its evolution, and its integration with Grid services; in particular, we focus on the use of new Grid component features such as those in LB and WMS version 3. Results from the last official SuperB production cycle are also reported.

022054
The following article is Open access

, , , , and

A critical component of any multicore/manycore application architecture is the handling of input and output. Even in the simplest of models, design decisions interact both in obvious and in subtle ways with persistence strategies. When multiple workers handle I/O independently using distinct instances of a serial I/O framework, for example, the way data from consecutive events are compressed together can lead to serious inefficiencies, with workers redundantly reading the same buffers, or multiple instances thereof. With shared reader strategies, caching and buffer management by the persistence infrastructure and by the control framework may have decisive performance implications for a variety of design choices. Providing the next event may seem straightforward when all event data are stored contiguously in a block, but such strategies may carry performance penalties when only a subset of a given event's data is needed; conversely, when event data are partitioned by type in persistent storage, providing the next event becomes more complicated, requiring marshalling of data from many I/O buffers. Output strategies pose similarly subtle problems, with complications that may lead to significant serialization and the possibility of serial bottlenecks, either during writing or in post-processing, e.g., during data stream merging. In this paper we describe the I/O components of AthenaMP, the multicore implementation of the ATLAS control framework, and the considerations that have led to the current design, with attention to how these I/O components interact with ATLAS persistent data organization and infrastructure.
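
A minimal sketch of a shared-reader strategy is given below: one reader thread fills a queue of event buffers that several workers consume, so that no worker re-reads the same compressed buffers. This only illustrates the general pattern with C++ threads; it is not the AthenaMP implementation, which is process-based and built on the ATLAS persistence infrastructure.

    // Shared-reader sketch (illustrative, not AthenaMP): one reader thread pushes
    // event buffers into a queue; worker threads pop and process them.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    struct EventBuffer { int id; std::string payload; };

    static std::queue<EventBuffer> g_queue;
    static std::mutex              g_mtx;
    static std::condition_variable g_cv;
    static bool                    g_done = false;

    static void reader(int nEvents)
    {
       for (int i = 0; i < nEvents; ++i) {
          EventBuffer evt{ i, "raw data for event " + std::to_string(i) };
          { std::lock_guard<std::mutex> lk(g_mtx); g_queue.push(evt); }
          g_cv.notify_one();
       }
       { std::lock_guard<std::mutex> lk(g_mtx); g_done = true; }
       g_cv.notify_all();
    }

    static void worker(int wid)
    {
       while (true) {
          std::unique_lock<std::mutex> lk(g_mtx);
          g_cv.wait(lk, [] { return !g_queue.empty() || g_done; });
          if (g_queue.empty()) return;                 // reader done, queue drained
          EventBuffer evt = g_queue.front(); g_queue.pop();
          lk.unlock();
          std::printf("worker %d processed event %d\n", wid, evt.id);
       }
    }

    int main()
    {
       std::thread r(reader, 100);
       std::vector<std::thread> workers;
       for (int w = 0; w < 4; ++w) workers.emplace_back(worker, w);
       r.join();
       for (auto &t : workers) t.join();
       return 0;
    }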

022055
The following article is Open access

, , and

A feasibility study into the acceleration of multivariate analysis techniques using Graphics Processing Units (GPUs) is presented. The MLP-based Artificial Neural Network method contained in the TMVA framework was chosen as the focus of the investigation. It was found that the network training time on a GPU became lower than for CPU execution as the complexity of the network increased. In addition, multiple neural networks can be trained simultaneously on a GPU within the time taken to train a single network on a CPU. This could potentially be leveraged to provide a qualitative performance gain in data classification.
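
As a point of reference for the method being accelerated, the sketch below books the standard (CPU) TMVA MLP classifier. The tree, variable, and option values are illustrative and are not those used in the study.

    // Standard CPU-side TMVA booking of the MLP neural network, shown only as a
    // reference for the method whose training is accelerated on the GPU.
    // Variable names, trees and option values are illustrative.
    #include "TFile.h"
    #include "TTree.h"
    #include "TMVA/Factory.h"
    #include "TMVA/Types.h"

    void trainMLP(TTree *signalTree, TTree *backgroundTree)
    {
       TFile *out = TFile::Open("tmva_mlp.root", "RECREATE");
       TMVA::Factory factory("MLPStudy", out, "AnalysisType=Classification");

       factory.AddVariable("var1", 'F');
       factory.AddVariable("var2", 'F');
       factory.AddSignalTree(signalTree, 1.0);
       factory.AddBackgroundTree(backgroundTree, 1.0);
       factory.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

       // Book the MLP method; the hidden-layer layout is only an example.
       factory.BookMethod(TMVA::Types::kMLP, "MLP",
                          "HiddenLayers=N+5:NCycles=500:TrainingMethod=BP");

       factory.TrainAllMethods();
       factory.TestAllMethods();
       factory.EvaluateAllMethods();
       out->Close();
    }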

022056
The following article is Open access

With an average of up to 30 pile-up interactions per bunch crossing in 2012 data, the ATLAS Inner Detector at the LHC is operating in an environment beyond its design specifications. This has a significant impact on event reconstruction, including additional demands on CPU time and disk space as well as an increased probability of reconstructing fake tracks. The track and vertex reconstruction has therefore been optimised for several high pile-up scenarios (see [1]). Studies in data and simulation are presented which demonstrate that the performance of the Inner Detector track and vertex reconstruction remains robust even in a high pile-up environment.

022057
The following article is Open access

Modern HEP analysis requires multiple passes over large datasets. For example, one first has to reweight the jet energy spectrum in Monte Carlo to match data before plots of any other jet-related variable can be made. This requires a pass over the Monte Carlo and the data to derive the reweighting, and then another pass over the Monte Carlo to plot the variables of real interest. With most modern ROOT-based tools this requires separate analysis loops for each pass, and script files to glue the two analysis loops together. A prototype framework has been developed that uses the functional and declarative features of C# and LINQ to specify the analysis. The framework uses language tools to convert the analysis into C++ and runs ROOT or PROOF as a backend to obtain the results. This gives the analyzer the full power of an object-oriented programming language to put the analysis together and, at the same time, the speed of C++ for the analysis loop. The tool allows one to incorporate C++ algorithms written for ROOT by others. The code is mature enough to have been used in ATLAS analyses. The package is open source and available on the open-source hosting site CodePlex.
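
The two-pass reweighting used as the example above can be written out by hand in plain ROOT/C++, which is essentially the boilerplate the LINQ-based framework generates behind the scenes. The sketch below does so; all tree, branch, and histogram names are illustrative, and the branches are assumed to hold double-precision values.

    // Hand-written two-pass reweighting in ROOT/C++ (what the framework automates).
    // Pass 1 derives data/MC weights from the jet-energy spectrum; pass 2 applies
    // them while filling another distribution. All names are illustrative.
    #include "TTree.h"
    #include "TH1D.h"

    void reweightExample(TTree *dataTree, TTree *mcTree)
    {
       TH1D hData("hData", "jet E_{T};E_{T} [GeV]", 50, 0, 500);
       TH1D hMC  ("hMC",   "jet E_{T};E_{T} [GeV]", 50, 0, 500);

       // Pass 1: fill the jet-energy spectra and derive bin-by-bin weights
       dataTree->Draw("jetEt >> hData", "", "goff");
       mcTree  ->Draw("jetEt >> hMC",   "", "goff");
       hData.Scale(1.0 / hData.Integral());
       hMC.Scale(1.0 / hMC.Integral());
       TH1D hWeight(hData);        // w(E_T) = data / MC, bin by bin
       hWeight.Divide(&hMC);

       // Pass 2: fill the variable of interest with the derived per-event weight
       TH1D hMass("hMass", "dijet mass;m_{jj} [GeV]", 60, 0, 3000);
       double jetEt = 0, mjj = 0;
       mcTree->SetBranchAddress("jetEt", &jetEt);
       mcTree->SetBranchAddress("mjj",   &mjj);
       for (Long64_t i = 0; i < mcTree->GetEntries(); ++i) {
          mcTree->GetEntry(i);
          hMass.Fill(mjj, hWeight.GetBinContent(hWeight.FindBin(jetEt)));
       }
       // hMass now holds the reweighted distribution
    }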

022058
The following article is Open access

, and

Increasingly detailed descriptions of complex detector geometries are required for the simulation and analysis of today's high-energy and nuclear physics experiments. As new tools for the representation of geometry models become available during the course of an experiment, a fundamental challenge arises: how best to migrate from legacy geometry codes developed over many runs to new technologies, such as the ROOT/TGeo [1] framework, without losing touch with years of development, tuning and validation. One approach, discussed within the community for a number of years, is to represent the geometry model in a higher-level language independent of the concrete implementation of the geometry. The STAR experiment has used this approach to successfully migrate its legacy GEANT 3-era geometry to an Abstract geometry Modelling Language (AgML), which allows us to create both native GEANT 3 and ROOT/TGeo implementations. The language is supported by parsers and a C++ class library which enables the automated conversion of the original source code to AgML, supports export back to the original AgSTAR [5] representation, and creates the concrete ROOT/TGeo geometry implementation used by our track reconstruction software. In this paper we present our approach, design, and experience, and demonstrate the physical consistency between the original AgSTAR and the new AgML geometry representations.

022059
The following article is Open access

, , , , , and

The BESIII experiment adopts a drift chamber as its central tracking detector. Significant distortion of the chamber due to mechanical imperfections degrades the momentum resolution. Software alignment is the only possible strategy to estimate the displacements of the sub-endplates of the chamber. We have developed alignment software within the framework of the BESIII Offline Software System. Cosmic-ray data are used for the preliminary alignment. The alignment method is introduced in this paper, and the results of the alignment are also presented.

022060
The following article is Open access

Over the past year several improvements have been made to the Geant4 hadronic physics code, for both HEP and nuclear physics applications. We discuss the implications of these changes for physics simulation performance and for user code. In this context several of the most frequently used codes are covered briefly. These include the Fritiof (FTF) parton string model, which has been extended to include antinucleon and antinucleus interactions with nuclei, the Bertini-style cascade with its improved CPU performance and extension to photon interactions, and the precompound and de-excitation models. We have recently released new models and databases for low-energy neutrons, and the radioactive decay process has been improved with the addition of forbidden beta decays and better gamma spectra following internal conversion.

As new and improved models become available, the number of tests and comparisons to data has increased. One of these is a validation of the parton string models against data from the MIPP experiment, which covers the largely untested range of 50 to 100 GeV. At the other extreme, a new stopped hadron validation will cover pions, kaons and antiprotons. These, and the ongoing simplified calorimeter studies, will be discussed briefly. We also discuss the increasing number of regularly performed validations, the demands they place on both software and users, and the automated validation system being developed to address them.
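
The FTF and Bertini-cascade models discussed above are most commonly picked up through a reference physics list such as FTFP_BERT. The sketch below shows a minimal Geant4 application selecting that list; the one-volume detector and the simple particle gun are deliberately trivial placeholders, not a realistic setup.

    // Minimal Geant4 application using the FTFP_BERT reference physics list,
    // which combines the FTF string model with the Bertini-style cascade.
    // The detector (a single iron block) and the particle gun are placeholders.
    #include "G4RunManager.hh"
    #include "G4VUserDetectorConstruction.hh"
    #include "G4VUserPrimaryGeneratorAction.hh"
    #include "G4NistManager.hh"
    #include "G4Box.hh"
    #include "G4LogicalVolume.hh"
    #include "G4PVPlacement.hh"
    #include "G4ParticleGun.hh"
    #include "G4ParticleTable.hh"
    #include "G4ThreeVector.hh"
    #include "G4Event.hh"
    #include "G4SystemOfUnits.hh"
    #include "FTFP_BERT.hh"

    class SimpleDetector : public G4VUserDetectorConstruction {
    public:
       G4VPhysicalVolume* Construct() {
          G4Material *iron = G4NistManager::Instance()->FindOrBuildMaterial("G4_Fe");
          G4Box *worldBox = new G4Box("World", 1*m, 1*m, 1*m);
          G4LogicalVolume *worldLog = new G4LogicalVolume(worldBox, iron, "World");
          return new G4PVPlacement(0, G4ThreeVector(), worldLog, "World", 0, false, 0);
       }
    };

    class SimpleGun : public G4VUserPrimaryGeneratorAction {
    public:
       SimpleGun() : fGun(new G4ParticleGun(1)) {
          fGun->SetParticleDefinition(
             G4ParticleTable::GetParticleTable()->FindParticle("proton"));
          fGun->SetParticleEnergy(10*GeV);
          fGun->SetParticleMomentumDirection(G4ThreeVector(0., 0., 1.));
       }
       ~SimpleGun() { delete fGun; }
       void GeneratePrimaries(G4Event *evt) { fGun->GeneratePrimaryVertex(evt); }
    private:
       G4ParticleGun *fGun;
    };

    int main()
    {
       G4RunManager *runManager = new G4RunManager;
       runManager->SetUserInitialization(new SimpleDetector);
       runManager->SetUserInitialization(new FTFP_BERT);   // FTF + Bertini cascade
       runManager->SetUserAction(new SimpleGun);
       runManager->Initialize();
       runManager->BeamOn(10);
       delete runManager;
       return 0;
    }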

022061
The following article is Open access

, , , , , , , and

The Daya Bay Reactor Neutrino Experiment uses an RPC detector system to detect cosmic-ray muons for the offline suppression of muon-induced backgrounds. This paper introduces the structure of the offline software for the RPC detector system, including simulation, detector calibration, and event reconstruction. In addition, preliminary analysis results based on the offline software are reported.