l4arch - data logger from L4 farm to archive
The Data Logger contains software and hardware to collect event data from L4 nodes and to store it on mass storage.

Introduction
A multi-process environment was designed and implemented in C/C++. The Arte process, running on an L4 node, writes events to a shared memory segment (Figure-1). The Sender process on the same node reads events from the shared memory and sends them over the network, using TCP/IP, to a Receiver process running on the Logger Node. The Receiver process writes the events to a shared memory segment. The Writer process, also running on the Logger Node, reads events from this shared memory segment and writes them to disk in three streams: full DST events, MINI DST events, and the Event Directory. The Archiver process runs on a dedicated node with access to the disks and to OSM; it saves the full DST files and the Event Directory files to the DESY Mass Storage System. The Stage process can be used to copy files from the Mass Storage System back to disk.
It is foreseen to use a file stack on DLT tapes in case the OSM robot does not work.
All processes in this chain send monitoring messages (using rpm) to the Monitor process running on a dedicated computer.
The concept foresees a system that is scalable both on the sender side (up to a few hundred L4 nodes) and on the receiver side (more than one Logger Node). This makes it possible to optimize performance by structuring the L4 farm into mini-farms of a few tens of nodes each.
A GUI written in Tcl/Tk allows one to monitor the logging procedure (Figure-2). A printout of the Monitor's data can also be made via the Print process.
To support the needs of the Data Logger and of online and offline Arte, the gpack package was implemented, which allows events to be read from and written to three types of streams: "shared memory", "disk", and "Event Directory" files. Fast event pre-selection in offline jobs is possible via user-provided SQL-like control statements.
Some measurements of the speed of the different network protocols/packages used in HERA-B were made.
It was decided that the DAQ-formatted event data (after event building and before online Arte) should be logged. This has been implemented as an additional stream, the so-called DAQ stream.

DAQ-stream
A user's process calls the function l4aLogEvt, which sends events to the DaqReceiver process using the RMP-flood protocol. The DaqReceiver, which runs on a Logger Node, puts the received events into shared memory. Another process, DaqWriter, reads these events from the shared memory and writes them to files on disk. These files are then moved to the OSM robot by the Archiver process.
The function l4aLogEvt takes three arguments: a pointer to the event, the size of the event, and a pointer to the event header structure. The last argument has the following fields. These fields, except the classmask, are used for the creation of the file name.
Files are written into the following tree on disk. The environment variable HB_ROOT_PATH shall point to the root of this tree. Note that the structure of the DAQ subtree is similar to that of the DST and MINI subtrees. The same tree is created on the OSM robot.
The Data Logger executables can be found in the $HBROOT/DATAMGR/pro/$HBBINTYPE/bin directory.

Processes
This process produces a listing of the Run Catalogue.