
Nuclear Instruments and Methods in Physics Research A 538 (2005) 692–702 www.elsevier.com/locate/nima

The data acquisition system of the neutron time-of-flight facility n_TOF at CERN

U. Abbondannoa, G. Aertsb, F. Álvarezc, H. Álvarezd, S. Andriamonjeb, J. Andrzejewskie, G. Badurekf, P. Baumanng, F. Bečvářh, J. Benlliured, E. Berthomieuxb, B. Betevi,1, F. Calviñoj, D. Cano-Ottc, R. Capotek, P. Cenninil, V. Chepelm, E. Chiaveril, N. Colonnan, G. Cortesj, D. Cortinad, A. Coutureo, J. Coxo, S. Dababnehp, S. Davidq, R. Dolfinir, C. Domingo-Pardos, I. Durand, M. Embid-Segurac, L. Ferrantq, A. Ferraril, R. Ferreira-Marquesm, H. Frais-Koelblt, W. Furmanu, I. Goncalvesv, E. Gonzalez-Romeroc, A. Goverdovskiw, F. Gramegnax, E. Griesmayert, F. Gunsingb, B. Haasy, R. Haightz, M. Heilp, A. Herrera-Martinezl, S. Isaevq, E. Jerichaf, Y. Kadil, F. Käppelerp, M. Kervenog, V. Ketlerovw, P.E. Koehleraa, V. Konovalovu, M. Krtičkah, H. Leebf, A. Lindotem, M.I. Lopesm, M. Lozanok, S. Lukicg, J. Marganiece, S. Marronen, J. Martinez-Valab, P. Mastinux, A. Mengonil, P.M. Milazzoa, A. Molina-Coballesk, C. Moreaua, M. Mosconip, F. Nevesm, H. Oberhummerf, S. O'Brieno, J. Pancinb, T. Papaevangeloul, C. Paradelad, A. Pavlikac, P. Pavlopoulosad, J.M. Perladoab, L. Perrotb, V. Peskovae, R. Plagp,*, A. Plompenaf, A. Plukisb, A. Pochj, A. Policarpom, C. Pretelj, J.M. Quesadak, W. Rappp, T. Rauscherag, R. Reifarthz, M. Rosettiah, C. Rubbiar, G. Rudolfg, P. Rullhusenaf, J. Salgadov, E. Schäferl,2, J.C. Soaresv, C. Stephanq, G. Taglienten, J.L. Tains, L. Tassan-Gotq, L.M.N. Tavorav, R. Terlizzin, G. Vanniniai, P. Vazv, A. Venturaah, D. Villamarin-Fernandezc, M. Vincente-Vincentec, V. Vlachoudisl, F. Vossp, H. Wendlerl, M. Wieschero, K. Wisshakp

a Istituto Nazionale di Fisica Nucleare-Sezione di Trieste, Italy
b CEA/Saclay, DSM/DAPNIA/SPhN, Gif-sur-Yvette, France

*Corresponding author. Tel.: +49 07247 823984; fax: +49 07247 824075.
E-mail address: [email protected] (R. Plag).
1 Presently at IPP, ETH-Zürich, CH-8092 Zürich, Switzerland.
2 Now at Acqiris SA, 18, chemin des Aulx, CH-1228 Plan-les-Ouates, Switzerland.

0168-9002/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.nima.2004.09.002


c Centro de Investigaciones Energéticas Medioambientales y Tecnológicas, Madrid, Spain
d Universidade de Santiago de Compostela, Spain
e University of Lodz, Lodz, Poland
f Atominstitut der Österreichischen Universitäten, Technische Universität Wien, Austria
g Centre National de la Recherche Scientifique/IN2P3 - IReS, Strasbourg, France
h Charles University, Prague, Czech Republic
i Central Laboratory of Mechatronics and Instrumentation, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
j Universitat Politècnica de Catalunya, Barcelona, Spain
k Universidad de Sevilla, Spain
l CERN, Geneva, Switzerland
m Laboratório de Instrumentação e Física Experimental de Partículas, Coimbra, Portugal
n Istituto Nazionale di Fisica Nucleare-Sezione di Bari, Italy
o University of Notre Dame, USA
p Forschungszentrum Karlsruhe GmbH, Institut für Kernphysik, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
q Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay, France
r Università degli Studi di Pavia, Italy
s Instituto de Física Corpuscular, CSIC-Univ. Valencia, Spain
t Fachhochschule Wiener Neustadt, Wien, Austria
u Joint Institute for Nuclear Research, Frank Laboratory of Neutron Physics, Dubna, Russia
v Instituto Tecnológico e Nuclear, Lisboa, Portugal
w Institute of Physics and Power Engineering, Obninsk, Russia
x Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro, Italy
y Centre National de la Recherche Scientifique/IN2P3 - CENBG, Bordeaux, France
z Los Alamos National Laboratory, New Mexico, USA
aa Oak Ridge National Laboratory, Physics Division, Oak Ridge, USA
ab Universidad Politécnica de Madrid, Spain
ac Institut für Isotopenforschung und Kernphysik, Universität Wien, Austria
ad Pôle Universitaire Léonard de Vinci, Paris La Défense, France
ae Kungliga Tekniska Högskolan, Physics Department, Stockholm, Sweden
af CEC-JRC-IRMM, Geel, Belgium
ag University of Basel, Switzerland
ah ENEA, Bologna, Italy
ai Dipartimento di Fisica and INFN, Bologna, Italy

The n_TOF Collaboration

Received 11 May 2004; received in revised form 13 August 2004; accepted 6 September 2004
Available online 5 October 2004

Abstract

The n_TOF facility at CERN has been designed for the measurement of neutron capture, fission and (n,n) cross-sections with high accuracy. This requires a flexible and, due to the high instantaneous neutron flux, almost dead-time-free data acquisition system. A scalable and versatile solution has been designed, based on 8-bit flash-ADCs with sampling rates up to 2 GHz and 8 Mbyte memory buffers. The software is written in C and C++ and runs on PCs under RedHat Linux.
© 2004 Elsevier B.V. All rights reserved.

PACS: 25.40.Lw; 25.40.Ny; 29.50.+v; 29.85.+c

Keywords: Flash-ADC; Data acquisition; Neutron time-of-flight spectroscopy; Pulse shape analysis


1. Introduction

The neutron time-of-flight (TOF) facility n_TOF at CERN is known for its outstanding TOF resolution, high instantaneous neutron flux and low overall backgrounds [1]. Neutron production at n_TOF is accomplished by spallation reactions in a massive Pb target. A solid Pb block of 80 × 80 × 60 cm³ is hit by a pulsed 20 GeV/c proton beam provided by CERN's proton synchrotron (PS), with an intensity of typically 4-7 × 10^12 protons per pulse and a pulse width of approximately 7 ns. The pulses are separated in time by at least 1.2 s. The cooling of the Pb target was designed to accept a maximum of five proton pulses per super cycle (16.8 s). The neutrons are moderated by the cooling water surrounding the Pb target and exhibit an energy spectrum from thermal up to the GeV region. Each proton produces a few hundred neutrons, which may reach the experimental area after traversing two collimators at 137 and 176 m from the spallation target. At a nominal flight path of 185 m, a neutron beam with a diameter of a few cm is available for the measurements.

Several experimental setups have to be served by the data acquisition system (DAQ) described in this paper: a pair of C6D6 detectors for capture measurements, a set of Parallel Plate Avalanche Chambers (PPACs) with up to 50 channels for fission measurements, and a MicroMegas detector for measuring the profile of the neutron beam. In addition, a neutron flux monitor consisting of a 6Li foil surrounded by four silicon detectors is used during the capture measurements, and a BaF2 crystal ball with 40 channels is in preparation for capture measurements starting in 2004. During all measurements, the pick-up signal of the proton pulse is also recorded as a time reference and for estimating the beam intensity.

The experimental conditions at n_TOF allow one to perform high-accuracy measurements of neutron capture (n,γ), neutron-induced fission (n,f) and neutron multiplication (n,n) cross-sections over a broad energy range. This variety of cross-section measurements and experimental setups requires a versatile, flexible and scalable DAQ. Accordingly, the n_TOF DAQ has been designed on the basis of high-performance flash-ADCs, for several reasons:

- The recording of all detector signals allows a rigorous assessment of the systematic uncertainties associated with the detectors' performance during the complete data taking period.
- An almost negligible dead time of ≤15 ns can be achieved.
- Most of the front-end electronics (e.g. preamplifiers, time and energy amplifiers, ADCs, ...) is no longer necessary, since it can be completely simulated in software during the processing of the flash-ADC data (raw data). The relevant parameters of the electronic signals, such as time information, amplitude and area, are calculated by means of dedicated pulse shape analysis algorithms for every detector type.
- Distortions in the raw data, such as pileup, baseline shifts and noise, are efficiently corrected during data analysis.

The difficulties of such a system are the huge amount of accumulated data, requiring large storage capabilities, high data transfer rates, and sufficient computing power for the analysis of the flash-ADC data. In principle, the idea of using flash-ADCs is not new. However, most of the flash-ADCs used for data acquisition have sampling rates of not much more than 100 MHz [2,3], a factor of 10 slower than the system described here. At these lower sampling rates (which are much easier to handle), the acquisition of fast signals, e.g. from C6D6 and BaF2 scintillators, is severely limited. The only other example of a comparably fast flash-ADC system is reported by Reifarth et al. [4]. In the following sections, the features, structure, hardware, and specific data processing of the new n_TOF data acquisition system are described.

2. Data acquisition system and data flow

The n_TOF DAQ consists, at present, of 44 flash-ADC (FADC) channels with 8-bit resolution, sampling rates up to 2 GHz, and 8 or 16 Mbyte of memory. A diagram of the complete data flow from the detectors to the tapes, as well as the flash-ADC data processing chain, is shown in Fig. 1. A detailed description of the front-end electronics and software is found in Section 4.

Each detector in the experimental setup is coupled to a corresponding FADC channel. Groups of 4 or 8 FADCs are physically plugged into readout PCs to improve the transfer rate. The raw FADC data files are sent via Gigabit Ethernet to a temporary disk pool located close to the counting station. Once the files are closed, they are transferred via Gigabit Ethernet to a second disk pool, from which they are finally migrated to tape.

The pulsed structure of the CERN PS accelerator allows the DAQ to be triggered by the impact of the proton beam on the spallation target. The trigger opens a 16 ms time window in which the neutrons, produced at the spallation target and traveling along the n_TOF flight path, have enough time to reach the experimental area. The accessible neutron energy range, from 0.7 eV to relativistic energies (typically a few hundred MeV), corresponds to maximum and minimum flight times of 16 ms and 620 ns, respectively. During the 16 ms flight time, each detector output is digitised by an 8-bit flash-ADC at a sampling rate of typically 500 MHz. The entire detector response is thus recorded in 8 Mbyte long FADC memory buffers.

To reduce this large amount of data, a compression algorithm is executed on the fly by the readout PCs. Presently, the data reduction is a fast zero-suppression algorithm that selects only the pieces of data containing true detector signals with amplitudes above a threshold set on the FADC. These pieces are extended by a pre-selected number of samples at the beginning and the end, as indicated in Fig. 2, to facilitate the pulse shape analysis, and are stored together with the exact time of the first threshold crossing.

Fig. 1. Schematic data flow from detectors to tape to data processing. (In the diagram, the detectors of each setup feed flash-ADC channels that are read out with zero suppression by a DAQ CPU at 80 MB/s; the resulting data streams 0 to N are written at 15 MB/s to a disk pool and migrated to the CASTOR tape pools; the raw data are copied disk-to-disk at 30 MB/s to a processing pool and analysed on CERN LXBATCH CPUs at 15 MB/s per CPU (pulse shape analysis of the individual detector signals, DST production); the DSTs of all streams are collected in a disk repository for event building and physics analysis; a separate pool holds the INFO data and serves the event display of the zero-suppressed raw data.)


The speed of the whole operation is limited mainly by the number of readout PCs and the readout time, which corresponds to the 80-100 Mbyte/s transfer rate limit of the PCI bus in the readout PCs. The zero suppression results in data reduction factors between 2 and 1000, depending on the detector type and experimental conditions, as indicated in Fig. 3. Better but more time-consuming compression algorithms are being prepared for the more demanding conditions of future measurements.

The set of signals corresponding to one detector and one proton pulse is formatted into a data buffer, and the data buffers of different detectors are grouped into a data stream. In practice, one data stream corresponds to a group of typically 4 or 8 flash-ADC channels physically connected to one readout PC (see Section 4 for details). In this way, the information of one event, that is, the complete collection of signals from all detectors for one proton pulse, is distributed among several data streams. A description of the data format is found in Section 5. This strategy reduces the readout time and preserves the scalability of the system without expensive hardware requirements, but makes on-line event reconstruction of all events nearly impossible. Thus, event reconstruction and physics analysis are performed at a later stage, after the raw data have been stored on tape (and disk).

The data in each stream are saved event-wise on a local temporary disk pool in the corresponding raw data stream files.


Fig. 3. Data rate with and without zero suppression for a single channel at the n_TOF experiment. Without zero suppression, the complete memory content of the digitizers (8 Mbyte) has to be stored. With zero suppression, the data rate is reduced by a factor of 20 for a 1 mm Au sample using a C6D6 detector. Due to the much smaller (n,γ) cross-section of carbon, the data rate for a 6.3 mm C sample is reduced by a factor of 100. The silicon detectors for monitoring the neutron flux contain a thin 6Li foil, resulting in a reduction factor of 400.

Fig. 2. Display of the data acquired by a selected channel. The top panel shows all data acquired during 16 ms; each vertical line represents an event. The middle panel shows a close-up with a length of 7 μs, corresponding to an expansion of the x-axis by a factor of ~2300. A further zoom by another factor of 10 focuses on a single pulse, illustrating the zero-suppression routine: it determines where a recorded signal exceeds a given threshold value. The part of the signal above threshold (good data) is complemented by a defined number of pre- and post-samples (black line). Special care has to be taken in cases where the number of samples between two threshold crossings is larger than the number of post-samples but smaller than the sum of pre- and post-samples; such a situation requires joining both pieces of good data.
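To make the selection and joining rules concrete, the following C fragment sketches a zero-suppression pass as described above and in Fig. 2. It is an illustration only, with our own type and function names, and is not the PACQ implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* One piece of "good data": a window of raw samples kept after zero
 * suppression, plus the index of its first threshold crossing. */
typedef struct {
    size_t first;   /* index of the first kept sample           */
    size_t last;    /* index one past the last kept sample      */
    size_t t_cross; /* index of the first threshold crossing    */
} zs_piece_t;

/* Scan a flash-ADC buffer of n samples and record windows around
 * threshold crossings, each extended by npre samples before and npost
 * samples after.  Windows whose extended ranges touch or overlap (gap
 * between crossings below npre + npost) are joined, as in Fig. 2. */
size_t zero_suppress(const uint8_t *buf, size_t n,
                     uint8_t threshold, size_t npre, size_t npost,
                     zs_piece_t *pieces, size_t max_pieces)
{
    size_t npieces = 0;
    size_t i = 0;
    while (i < n) {
        if (buf[i] <= threshold) { i++; continue; }  /* baseline */
        size_t t_cross = i;
        size_t first = (i > npre) ? i - npre : 0;
        size_t last = i + 1;   /* one past last above-threshold sample */
        /* keep extending while new crossings occur within npost */
        while (i < n && i < last + npost) {
            if (buf[i] > threshold)
                last = i + 1;
            i++;
        }
        size_t end = (last + npost < n) ? last + npost : n;
        if (npieces > 0 && first <= pieces[npieces - 1].last) {
            pieces[npieces - 1].last = end;  /* join with previous piece */
        } else if (npieces < max_pieces) {
            pieces[npieces] = (zs_piece_t){ first, end, t_cross };
            npieces++;       /* (pieces beyond max_pieces are dropped) */
        }
        i = end;             /* resume scanning after the post-samples */
    }
    return npieces;
}
```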


A file size of 2 Gbytes is adopted to minimize data losses due to occasional file corruption. As soon as one stream file reaches the file size limit, all files are closed and new ones are opened. At present, the input/output (I/O) speed of the first disk pool is 15 Mbyte/s, although new and faster disk servers reaching I/O rates of 40-50 Mbyte/s will be put into operation in early 2004. An EventDisplay program allows the raw data files to be read from the local disk pool and the digitized signals of all detectors present in the setup to be visualized. For this reason, the files are kept on disk as long as possible, until the space is required for storing new incoming data.

After a raw data file is closed, it is immediately transferred via Gigabit Ethernet to CASTOR (CERN Advanced STORage manager) [5], which runs the Central Data Recording (CDR) software [6]. The files arrive at a second disk pool, from where they are migrated to tapes. The transfer rate to CASTOR is typically 15 Mbyte/s but can easily be upgraded depending on the experimental needs.

In addition to the raw data streams, there is a so-called "index stream" containing additional information on the setup and slow control:

- A table of contents for the data streams, describing the position of each event in each data stream. This allows the software reading the data streams to find a given event and to access it directly without scanning the whole stream. This feature is used, for example, in the EventDisplay of the raw data.
- The data concerning the proton beam, which are read via Ethernet and recorded in the index file for each pulse: the number of protons, the exact time, and the beam type.
- The sample currently measured, as well as the list of samples mounted in a sample exchanger.
- The count rate of three BF3 neutron monitors at the very end of the neutron flight path, which is registered on a Meilhaus ME1400 PCI card and read out by the DAQ software.
- The positions of the neutron filters, which are read via the digital I/O port of the same ME1400 PCI card. The filters are a set of 10 different materials with black resonances, which can be remotely inserted into the neutron beam. They are used to completely absorb neutrons in a certain energy interval in order to investigate the background in these regions.
- The run description, a text containing the purpose of the current measurement (run) and other comments, which the user should enter before starting a run. This information can be updated at any time during the run.
- The high-voltage settings for the detectors used in the current setup. This information is provided by a LabView program which remotely controls the high-voltage power supply, and is sent to the DAQ software via Ethernet.

The index stream files are stored on tape and kept on disk as well.
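To illustrate how the table of contents permits direct access, the following hypothetical C fragment jumps straight to one event in a stream file. The entry layout and all names are assumptions made for this sketch; the actual index stream is a BOS-bank structure (Section 5):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical table-of-contents entry as it might appear in the
 * index stream: the byte offset of each event in one data stream. */
typedef struct {
    uint32_t event;   /* event number (proton pulse number)          */
    uint64_t offset;  /* byte position of the event in the stream    */
} toc_entry_t;

/* Seek directly to a given event in an open raw-data stream file,
 * instead of scanning the whole stream.  Stream files are capped at
 * 2 Gbyte (see above), so a long offset suffices on most platforms. */
int seek_event(FILE *stream_file, const toc_entry_t *toc,
               size_t n_toc, uint32_t event)
{
    for (size_t i = 0; i < n_toc; i++) {
        if (toc[i].event == event)
            return fseek(stream_file, (long)toc[i].offset, SEEK_SET);
    }
    return -1;  /* event not present in this stream */
}
```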

3. The raw data processing

After a raw data file is migrated to tape, it is mirrored to a disk pool with adequate read permissions. From there, it is read and processed by the Data Processing Software (DPS). As mentioned in Section 2, no action is performed before tape migration occurs, in order not to hinder the data flow by excessive I/O operations. The raw data processing consists of the pulse shape analysis of the different FADC data sets corresponding to the detectors coupled to the DAQ, and runs in batch mode on CERN's batch service LXBATCH [7]. Specific algorithms written in C are available for every individual detector type. The relevant signal parameters, such as time, energy (defined by the signal amplitude or area) and quality factors such as pileup order, baseline value, baseline root mean square (RMS), saturation flag, and particle discrimination, are extracted from the pulse shapes, formatted and saved as Data Summary Tapes (DSTs). The DSTs are stored on tape and kept on disk at the same time.

The data format of the DSTs is nearly identical to that of the raw data files described in Section 5. The only difference is the replacement of the digital pulse shape buffers by the much smaller DST-parameter buffers: while the size of


a digitized signal is typically a few kB, the respective physics information (such as time and energy) can be stored in less than 100 bytes. The data reduction factors in the DSTs depend on the detector type and typically range from 10 to 100.

The DPS is written in C and has a hierarchy of three levels:

- A first level, consisting of a Launcher program which starts the second level and, after a user-defined time, starts another replica of itself and terminates.
- A second level, consisting of a Creator program that verifies that no other copy of itself is running, looks for new data files, verifies that none of them are being processed, and starts a user-defined number of third-level jobs in batch mode.
- A third level, consisting of a Processor program that opens a new raw data file, reads in the flash-ADC data for each detector, applies the proper pulse shape analysis algorithms to each data buffer, formats the DST buffers, stores them in a file and, after termination, migrates the file to tape.
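To illustrate the size relation quoted above (a few kB of digitized signal versus less than 100 bytes of physics information), a DST parameter set for one signal might be laid out as in the following C sketch. The field list follows the parameters named at the beginning of this section, but the names, types and packing are illustrative, not the actual BOS-bank based DST format:

```c
#include <stdint.h>

/* Illustrative layout of one DST parameter set as produced by the
 * pulse shape analysis (Section 3).  The real DST format is a
 * BOS-bank structure (Section 5); this struct only shows that the
 * extracted physics information fits comfortably in under 100 bytes. */
typedef struct {
    double  time;         /* TOF of the signal                        */
    float   amplitude;    /* fitted or measured signal amplitude      */
    float   area;         /* signal integral after baseline removal   */
    float   baseline;     /* local baseline value                     */
    float   baseline_rms; /* RMS of the baseline                      */
    uint8_t pileup;       /* pileup order                             */
    uint8_t saturated;    /* saturation flag                          */
    uint8_t particle_id;  /* particle discrimination result           */
} dst_params_t;           /* ~32 bytes: well below the ~100 quoted    */
```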

The DPS runs without interruption during the whole data taking period, creating a DST file a few hours after the raw data file has been migrated to tape. In this way, a quasi-online monitoring of the measurements is achieved with the full statistics accessible.

A variety of pulse shape analysis algorithms is available for the different detectors used at n_TOF, with specific features:

- Monitors: definition of the baseline from the average value of the presamples, calculation of the integral and amplitude of each signal after baseline subtraction, and calculation of the TOF by means of a simulated discriminator or constant fraction discriminator.
- Fission detectors (PPACs): Fourier analysis and noise filtering, calculation of the amplitude and area of the filtered signal, and TOF calculation by means of a simulated discriminator.
- C6D6 and BaF2 detectors: iterative baseline reconstruction, and calculation of the amplitude and TOF of each signal by a least-squares fit of the digital pulse shape to a reference pulse shape determined a priori.

The pulse shape analysis methods will be described elsewhere [8,9]. The quality of the analysis for the particular case of the C6D6 liquid scintillators is illustrated in Fig. 4 by the comparison of the FADC output (in black) and the reconstructed signal (in red), which is obtained by an iterative baseline correction and a least-squares fit of the digital signals to a reference C6D6 pulse shape. The reconstruction, with the signal amplitude as a free parameter, not only allows the correct time and amplitude of the signal to be extracted, but also accounts for pileup and for a distortion of the baseline (due to the PMT response) about 500 ns after each signal. This shows the advantage of the FADC concept in obtaining full control over the systematic uncertainties inherent in counting experiments.

Based on the DSTs, the final cross-section analysis is performed with much lower I/O transfer rates and less CPU power. A data analysis library (DAL) has been written in C to simplify the task of building the DST-based analysis programs. Operations such as event building, low-level data retrieval and data integrity checks are built-in features of the DAL, allowing the user

- to select a set of files and detectors,
- to open the complete set of DSTs corresponding to a particular measurement,

Fig. 4. The digitised C6D6 signals compared to the reconstruction using a pulse shape fitting routine. (Axes: flash-ADC channel vs. time in ns; the labels mark the digitised C6D6 signals, the reconstructed C6D6 signals and the full response of the PMT.)

- to manipulate the complete information corresponding to a particular parameter of one event, i.e. the time-ordered response of all detectors, beam and monitor information, and
- to implement user-defined analysis algorithms on an event-wise basis.
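Since event building has to collect the signals of one proton pulse from several streams (Section 2), a minimal sketch may help to fix the idea. The C fragment below assumes each stream delivers data buffers tagged with the event number in increasing order; the types and names are our assumptions, not the DAL interface:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t event;      /* proton pulse number this buffer belongs to */
    /* ... detector data would follow here ...                         */
} buffer_t;

typedef struct {
    const buffer_t *buf; /* buffers of this stream, event-ordered      */
    size_t n, pos;       /* buffer count and current read position     */
} stream_t;

/* Advance every stream to the given event; returns 1 if every stream
 * has a buffer for it (a complete event could be built), 0 otherwise. */
int build_event(stream_t *streams, size_t nstreams, uint32_t event)
{
    int complete = 1;
    for (size_t s = 0; s < nstreams; s++) {
        while (streams[s].pos < streams[s].n &&
               streams[s].buf[streams[s].pos].event < event)
            streams[s].pos++;            /* skip older events */
        if (streams[s].pos >= streams[s].n ||
            streams[s].buf[streams[s].pos].event != event)
            complete = 0;                /* stream missed this pulse */
    }
    return complete;
}
```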

4. The front-end electronics and DAQ software

The data are acquired by DC270 and DC240 FADC modules manufactured by Acqiris [10]. These devices conform to the CompactPCI (cPCI) specification [11] and are installed in a cPCI crate which can house up to four modules. The DC270 modules offer four data inputs plus one trigger input, a maximum sampling rate of 1 GHz and a bandwidth of 250 MHz. The DC240 modules offer only two data inputs plus one trigger input, but a maximum sampling rate of 2 GHz and a bandwidth of 500 MHz. Both have a resolution of 8 bits. When the modules are triggered, they sample the input signal and store the data in an internal buffer. This buffer has a maximum size of 8 Mbyte (DC270) or 16 Mbyte (DC240). The sampling stops after a given amount of data has been acquired or when the internal buffer is full.

After the sampling is completed, the data are transferred to a readout PC, which is connected to the flash-ADC modules via a cPCI-to-PCI link. The flash-ADC modules are therefore seen by the PC as local PCI cards. Accordingly, the data transfer from the flash-ADCs to the PC memory is limited by the 80-100 Mbyte/s transfer rate of the PCI bus. Due to this limit, only a certain number of channels can be transferred into the readout PC memory during the 1.2 s minimum time between two proton pulses. For example, the transfer of 8 channels, each with an 8 Mbyte buffer, already requires 0.8 s at 80 Mbyte/s. For this reason, each readout PC serves typically 8-10 channels, which means that several readout PCs and cPCI crates have to be used, depending on the total number of channels.

The PCs are operated under the CERN version of RedHat Linux, which is regularly updated. For


the readout of the flash-ADC data, the program PACQ has been developed; it performs the readout, the data compression, and the transfer of the compressed data to the disk array, which is used as a buffer. This program communicates with a central control software, PROD, which synchronizes all readout PCs and collects and writes the "index stream". The communication is accomplished via UDP sockets. The readout PCs, as well as the PC running PROD, are attached to the network via Gigabit links over copper cable, as illustrated in Fig. 5.

The program PACQ on the readout PCs is controlled by a state manager. This means that the program is normally in one of the following states: Idle, Readout or Calibration. Readout is the state for acquiring data, and Idle is used after the data acquisition is stopped by the user. The Calibration state is intended for a self-calibration of the flash-ADC modules. The state can be changed by PROD, which sends a corresponding message to PACQ. While the system is in a given state, it repeatedly calls the corresponding state function, for example the Readout function, which handles the data acquisition for a single proton pulse. If the state is changed, a transition function is executed. For instance, if the user starts a measurement, the state is changed from Idle to Readout, and hence the transition function IdleToReadout is called, which configures the digitizers and creates new files for the storage of the data.

The Readout function first receives the event number (the number of the next proton pulse) from PROD and then arms the digitizers, which means the digitizers are ready to acquire data and are waiting for the trigger signal. The trigger signal, which is initially disabled by PROD, is only enabled after all readout PCs have communicated to PROD that they have armed their flash-ADCs. After enabling the trigger, the next proton pulse will trigger the ADCs and also PROD, which inhibits the trigger signal again. Inhibiting the trigger signal ensures that readout PCs which finish their data processing first do not already receive the next trigger while other readout PCs are still busy. This should never happen during normal operation, since all PCs should be able to accomplish their data processing within 1.2 s, but it ensures a stable system also during abnormal conditions.
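The state-manager logic just described can be summarised in a short C sketch. The three states and the transition function IdleToReadout follow the text; the function bodies are stubs, and the message handling from PROD is only indicated in comments:

```c
/* Sketch of the PACQ state manager: the program sits in one of three
 * states, repeatedly calls the corresponding state function, and runs
 * a transition function whenever the state changes. */
typedef enum { STATE_IDLE, STATE_READOUT, STATE_CALIBRATION } state_t;

static void idle(void)        { /* wait for commands from PROD */ }
static void readout(void)     { /* one proton pulse: get event number
                                   from PROD, arm digitizers, wait for
                                   the trigger, read out and compress */ }
static void calibration(void) { /* self-calibrate the flash-ADC modules */ }

static void idle_to_readout(void) { /* configure digitizers and open new
                                       data files (IdleToReadout) */ }

void run_state_machine(void)
{
    state_t state = STATE_IDLE;
    for (;;) {
        state_t next = state;
        /* ... in PACQ, `next' would be set by UDP messages from PROD ... */
        if (next != state) {
            if (state == STATE_IDLE && next == STATE_READOUT)
                idle_to_readout();       /* transition function */
            state = next;
        }
        switch (state) {                 /* repeated state function call */
        case STATE_IDLE:        idle();        break;
        case STATE_READOUT:     readout();     break;
        case STATE_CALIBRATION: calibration(); break;
        }
    }
}
```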



Fig. 5. Schematic view of the n_TOF data acquisition system. The output produced by the detectors (left) is digitized by flash-ADCs over a period of typically 16 ms. The data are transferred via cPCI/PCI adapters into the Readout-PCs. These Linux PCs compress the data and transfer them over Gigabit links (bold black arrows) to the disk server, where the data await transport to CERN's tape pool CASTOR. An additional Control-PC synchronizes the whole system and acquires information about the proton beam, high voltages, samples, etc. A few Monitor-PCs have access to the data temporarily stored on the Disk-Server and hence allow a quasi-online monitoring of events. The number of FADC/Readout-PC combinations can vary and is only limited by the bandwidth of the disk server.

After the sampling of the input data has finished, the acquired data are transferred into the PC memory by PACQ, which also compresses them, and then sent to the tape and disk pools as explained before.

Both the index and data streams contain the complete configuration of the data acquisition system, which can be defined via the graphical user interface (GUI) RunControl. For each flash-ADC channel this includes: the identification of the detector connected to the channel, the input range in volts, the number of pre- and post-samples, the threshold for zero suppression, the sampling rate, the time period over which the input signal is sampled, and other less important data. All the above-mentioned data, except the list of offsets, are accessible via the internet on a dedicated web server.

The user interface for the control of the data acquisition system is the program RunControl (Fig. 6), which runs on a PC in the control room and communicates with PROD via Ethernet, as illustrated in Fig. 5. It shows status information about the current run, e.g. the run number, the accumulated number of proton pulses in the

current run, the intensity of the last proton pulse, the integrated number of protons of the current run, and the number of protons accumulated on the current sample. Furthermore, it displays the amount of data acquired per channel and per data stream, and warns the user optically and acoustically if it detects a problem, e.g. if the beam unexpectedly stops or if a detector does not produce any data. A special window allows the user to configure the flash-ADC channels. In spite of the large number of channels, any combination of channels can be selected and common parameters can be set in a single step.

The setup and continuous control of the detectors are facilitated by the possibility of displaying the recorded raw data. This is the task of the EventDisplay, which can access the data temporarily stored on the Disk-Server. In this way, a quasi-online monitoring is possible. The EventDisplay is also able to create simple TOF, neutron energy and pulse height spectra in order to check the proper operation of the detectors. This is helpful, for example, for adjusting the high-voltage settings of the C6D6 detectors with calibration sources.
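A per-channel configuration record of this kind might look as follows in C. The field list mirrors the parameters enumerated above, but the names, types and units are our assumptions, not the actual RunControl format:

```c
#include <stdint.h>

/* Illustrative per-channel configuration as set through the
 * RunControl GUI and stored in both the index and data streams. */
typedef struct {
    char     detector[32];    /* detector connected to this channel     */
    float    input_range_v;   /* full-scale input range in volts        */
    uint32_t presamples;      /* samples kept before a threshold cross  */
    uint32_t postsamples;     /* samples kept after a threshold cross   */
    uint8_t  threshold;       /* zero-suppression threshold (ADC units) */
    uint32_t sample_rate_mhz; /* sampling rate, e.g. 500 or 2000        */
    uint32_t window_us;       /* sampled time window, typically 16000   */
} fadc_channel_cfg_t;
```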


Fig. 6. Screenshot of the graphical user interface RunControl. In the top part, status information such as the intensity of the proton beam, the accumulated number of protons, the sample in the beam, etc. is displayed. Below, there is a simple text editor used to enter additional information about the current run, and a display for status messages and warnings. The bottom part shows the data rate of each stream and the proton pulse history during the last minutes; it currently displays two proton pulses per super cycle. The column on the right shows the data rate of each channel (after zero suppression) during the last proton pulse. Detectors with zero data rate are highlighted by a red background.

5. Data format

The n_TOF raw data format is based on the BOS bank system. BOS banks are logical records in the data file that consist of a four-word header (1 word = 4 bytes) followed by a variable number of data words. For example, the data streams use four bank types: the RCTR (RunConTRol) bank, followed by general information about the run; the MODH (MODule Header) bank, followed by the configuration data of the ADCs; and the EVEH (EVEnt Header) bank, followed by all data acquired at one proton pulse, which includes one ACQC (ACQuisition Chunk) bank for each channel, containing the acquired data. The four words of the BOS header hold the bank name, the version of the header, a reserved word, and the size of the remaining data in the bank. This system allows programs reading the tapes to skip unknown or uninteresting banks. It also provides a very robust format in case of errors, e.g. due to bad tapes: if the reading software encounters a faulty part of a file, it can scan for the next valid BOS header and continue reading.
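The header layout and the skipping mechanism can be illustrated with a short C sketch. The struct follows the four-word description above, while the field names, the byte interpretation of the size word, and the bank-walking code are our assumptions:

```c
#include <stdint.h>
#include <string.h>

/* The four-word BOS bank header (1 word = 4 bytes) as described in
 * the text.  Assumes native byte order and that `size' counts bytes;
 * the real format may differ in both respects. */
typedef struct {
    char     name[4];   /* bank name: "RCTR", "MODH", "EVEH", "ACQC" */
    uint32_t version;   /* version of the header                     */
    uint32_t reserved;  /* reserved word                             */
    uint32_t size;      /* size of the remaining data in the bank    */
} bos_header_t;

/* Walk a memory-mapped file bank by bank, skipping banks whose names
 * we do not recognise -- the property that lets readers ignore
 * unknown or uninteresting banks. */
const uint8_t *next_known_bank(const uint8_t *p, const uint8_t *end)
{
    while (p + sizeof(bos_header_t) <= end) {
        const bos_header_t *h = (const bos_header_t *)p;
        if (memcmp(h->name, "RCTR", 4) == 0 ||
            memcmp(h->name, "MODH", 4) == 0 ||
            memcmp(h->name, "EVEH", 4) == 0 ||
            memcmp(h->name, "ACQC", 4) == 0)
            return p;                          /* a bank we understand */
        p += sizeof(bos_header_t) + h->size;   /* skip unknown bank    */
    }
    return NULL;  /* no further known bank in this buffer */
}
```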

6. Summary

The n_TOF data acquisition system was successfully launched in November 2000 and has since been improved in many respects by a number of extensions. It represents a reliable, versatile and scalable solution with nearly zero dead time, based on 8-bit FADCs with sampling rates up to 2 GHz. The necessary high data transfer rates and the related needs for extensive storage and CPU access have been met by taking advantage of the CDR/CASTOR mass storage and the LXBATCH computing resources at CERN.

Acknowledgements

We would like to thank Ulrich Fuchs, John Lee Gordon, Harry Renshall and Tim Smith from the CERN-IT division and the CDR/CASTOR and LSF/LXBATCH teams for their kind assistance and help. This work was supported by the EC under Contract No. FIKW-CT-2000-00107 and the following national agencies:

- Spain: Ministry of Science and Technology and ENRESA.

References

[1] CERN n_TOF Facility: Performance Report, CERN, 2002. On-line access: http://www.cern.ch/n_tof.
[2] T. Kihm, et al., Nucl. Instr. and Meth. A 498 (2003) 334.
[3] Y. Kato, et al., Nucl. Instr. and Meth. A 498 (2003) 430.


[4] R. Reifarth, et al., Background identification and suppression for the measurement of (n,γ) reactions with the DANCE array at LANSCE, Nucl. Instr. and Meth. A 531 (2004) 528.
[5] CASTOR website: http://cern.ch/castor.
[6] CDR website: http://cdr.web.cern.ch/cdr/overview.html.
[7] LSF/LXBATCH website: http://batch.web.cern.ch/batch/.
[8] L. Tassan-Got, et al., Pulse shape analysis routine for PPAC detectors in (n,f) cross-section measurements, in preparation.
[9] D. Cano-Ott, et al., Pulse shape analysis routine for C6D6 detectors in (n,γ) cross-section measurements, in preparation.
[10] Acqiris website: http://www.acqiris.com.
[11] PCI Industrial Computers Manufacturers Group website: http://www.picmg.org/.