Particle Physics Group




SuperNemo Prototype DAQ System








This website gives technical details of the SuperNemo Prototype Readout used on the 90Cell and other drift tubes.


H1FTT Board

The basic hardware is a reuse of the H1 FTT system. This comprises a VME-based 30-channel 100 MHz FlashADC board which can be controlled by a standard VME host CPU. In our case this is a regular Pentium M based PC running a reasonably recent version of Linux (Mandriva 2008.1 at the last install). This PC has no hard drive and so relies on a network boot host, which provides it with the boot system and storage. This is again a fairly regular Linux machine, with two network cards: one for a private network, the second connected to the normal network. The two machines are also linked via a serial console cable, to allow low-level access and debugging of the VME host.

Booting the system should show the VME host requesting a DHCP lease, loading the kernel over the network and finally booting into Linux. This is entirely automatic. Shutting down the system, however, is not trivial: you need to log into the VME host, tell it to shut down, and only then can you power off the crate. Under no circumstances simply power things off, as that could corrupt the network-mounted disk. In situations where things have crashed, it is advisable to try everything else to recover a login before resorting to power cycling the crate.


In general the firmware is not loaded into the boards on power up. It is loaded by a few software commands that are usually performed during the initialisation of the system. Boards with loaded firmware will show flashing LEDs behind each FPGA. A dim, constant LED indicates no firmware is loaded, and the board will be unresponsive to anything except very basic functions.

There are 3 FPGAs:

VME Interface (Always works)

ADC Front End (Software loaded)

Memory Back End (Software loaded)

The front and back ends are where all the main work is done. The firmware in these is customised and can be altered, but modifications take rather a long time to make.

Flashing new firmware into the board takes about 20 minutes, and is best performed by someone who knows how it works. Several steps are needed to generate the correct format, all requiring access to computers inside the DAQLab.

Each FPGA has a date and revision counter, so that software can examine which firmware version is loaded.

The latest firmware resides in SVN at DAQLic SVN, where it can be checked out via standard Subversion tools or via the web interface. The code is designed to be compiled with Synplify, which will then call the correct Altera tools, although it is possible to import it into Altera Quartus directly. Once built, the Altera tools can be used to generate the PROM images.

If the PROM is programmed directly, via a pod, you need to set some jumpers on the board (these can be found in the schematic); otherwise it is easier to do it via software with the tools in the libh1supernemo package.

Memory Map

A full list of all registers and the addressing scheme is shown on the following web page:

Current Memory Map

In general, memory accesses should be done via the provided driver software, which will insulate users from the rather complicated memory layout of the board.

Event Format

In memory, events are stored in block form: each channel of each FPGA is allocated a range of memory for its events. There are two buffers per channel, a primary and a secondary. The idea is that when you start to read out, you switch them over to allow seamless continuation of event recording.

Events consist of a Header, followed by ADC data that is post trigger in time, then a block of ADC data that is pre-trigger in time.


Section        Length
Header         Fixed at 16 bytes
Post trigger   Determined by data length register
Pre trigger    Fixed at 128 bytes

The header is defined as follows; the offsets are byte offsets from the start of the event. Bytes 0 and 1 hold two further 1-byte fields, bringing the header to its fixed 16-byte length.

Field         Size      Byte offset
Channel ID    2 bytes   2 to 3
Post Length   2 bytes   4 to 5
Pre Length    2 bytes   6 to 7
Event No.     2 bytes   8 to 9
Counter H     2 bytes   10 to 11
Counter M     2 bytes   12 to 13
Counter L     2 bytes   14 to 15

The indices in the table are byte offsets into the memory. The layout is not ideal, and hopefully you will not have to deal with it directly: code already exists inside SQuiD to do the unpacking. However, if you are accessing the board directly during debugging, this is the format. Event ADC data is then just a large block of 8-bit numbers, stored and accessed in pairs to make the 16-bit words that the hardware operates on. Again, the library code can unpack these.

Direct memory access to events is done via two index registers, a High and a Low address. The High address selects which channel's buffer you want to read out, and loading the Low address causes the hardware to pre-load eight 16-bit words from the DAQ ready for readout. The usual sequence is as follows.

Load High address with index of channel.
Load Low address with address of the event start.
Read out the 8 DAQ words, and load a new Low address.
Repeat until you have accessed enough words.

SQuiD has code to do this, but it may be useful to know how it works in case you need to debug any problems.



This low-level VME driver is provided by the machine's makers (MEN) and is located in /opt/menlinux. I have modified the compile scripts to allow it to build on modern Linux kernels, and it should currently work on anything up to 2.6.29. Mapping of the VME space into main memory is handled by this driver at run time, and there are some tools for doing diagnostics and other operations directly on it. In general, apart from debugging, there is no need to get involved with the software at this level.

When logged into the VME host, you can test whether the VME driver is loaded by running "cat /proc/devices | grep vme" and seeing if vme4l exists.

All the code from MEN is here.

Instructions for building are here.


The main software interface consists of a standard Linux shared library. This package contains three main parts. The first is the low-level VME interface libraries from the VMEDriver; the second is the libh1supernemo wrapper around these drivers, allowing you to access the boards in a more controlled environment. Finally there are some example/demo utilities that use the library to do various tasks. Some of these are simple tests; others are more useful, like flashing new firmware into the board, or even a full diagnostic system that gives you register-level access to the boards with a simple text-based UI.

The code for the library is accessible via DAQLab SVN. In general I will install, maintain and debug the library code, but if you are interested in how things work, or where registers are, this is a good place to look. The test applications included are also a good way to see how to do things if you want to interface directly to the library.

Linking against the libh1supernemo library is done using the standard Linux automake/autoconf system, and the library provides pkg-config files to allow this. An example of how to do this is given by the main snemo_h1daq system, which does exactly that.

The latest version of the library now includes functionality to handle the Service Module card. This is used to synchronise multiple FEMs for run starts and event clock timing. This module always appears as board number 0 in the system, and there is also a flag inside the data structure to identify it. However, you need to be careful reading out its registers, as they differ from the normal boards. Ideally you will never need to access this directly: the library will initialise things and has functions to handle the Sync/Start operations.

This version also handles the new VME interface and allows the use of direct memory-mapped IO to the VME, but this has the side effect of less error control. It should therefore only be used when you are very sure there is hardware at the correct address, and is best reserved for time-critical parts rather than general IO.

SQuiD prototype DAQ


This is a simple prototype DAQ system for reading out a set of boards in a semi user-friendly way. The UI is based on ncurses and uses libh1supernemo for low-level access. It will run over an ssh link without any problems. A simple menu allows you to perform various operations, and some status panels show what is going on.

Installation Instructions.

Underneath, it is a bit more advanced. There are three separate processing threads: the UI thread is the main controlling one, and there are DiskIO and VMEIO threads to help make things easier to understand.

The VMEIO thread sets up the boards, configures them at start of run and then polls them at a regular interval to check for events. These are then read out, stored in memory and passed to the DiskIO thread.

DiskIO is where the events are pulled from memory, sorted and stored in the simple text files that constitute the event storage. They are ASCII text, and easy to understand.

The program reads a simple text config file, which defines what boards are expected, what directory to save data to, what directory contains each board's configuration file, and the maximum size of each event file.

# A test config
RUN_DIR = /tmp/squid_data
BOARD_CONFIGBASE = ./testconfigs
RUN_MAXSIZE = 0x8000
# External Run start
# ExternalSync
# ExternalTrigger
# External Inhibit
# Readout Trigger Inhibit

The board configuration files are more complex, and consist of values for each of the registers inside the FPGAs needed to configure a working system. An example configuration is here. These files can be generated by hand, or you can use libh1supernemo's functions to read out a board and then edit the file.

The latest version handles the Service Module, and can use that to sync start a run over multiple boards without problems. It should also be noted that you no longer need a config file for board 0, as that is the Service Module.

Event Format

SQuiD stores events in the “run directory” inside several ASCII text files:

  • last_run_number.dat

    • This simply holds the number of the last run that was performed.

  • %n_run_index.dat

    • This holds two entries:

    • FILES %n : For the total number of data files in this run.

    • EVENTS: %n : For the total number of events in the run.

  • %n_run.dat%n

    • These are event data files, stored in ASCII with text headers to mark events.

      • #EVENT means a new event starts

      • #HDR means a decoded header

      • #DATA means raw event data, which is already converted into bytes.

      • #EOF means the event ends.

    • The decoded Header is as follows

      • Board Number

      • Channel ID

      • Post Trigger sample count

      • Pre Trigger sample count

      • Event ID

      • Counter High Word

      • Counter Middle Word

      • Counter Low Word

      • Trigger

    • The raw #DATA also contains the header, but in the packed form mentioned in the hardware section. The header is followed by the pre-trigger data, and then the post-trigger data. The trigger point is then easily found by looking a fixed number of bytes (header plus pre-trigger length) into this raw data array.

        An example event structure can be found in this directory

Some example C++ code that works in ROOT is present here. It is, however, very basic and not very good. It does work, though, and the event loading and structures can be used by anyone without many changes.

Running the System.

The SBC sits on a private network that is accessible by logging into razzle and then logging on to “snemosbc.snemo” from there.

#> ssh -X

[snemodaq@razzle ~]$ ssh -X snemodaq@snemosbc.snemo

The initial connection to razzle is done using ssh-keys for security. So if you need to access the system you need to forward your public key to me so that I can add it to the list of authorized keys. If you don't have one, or don't know what that means then please come and ask.

Next change to the correct directory

[snemodaq@snemosbc ~]$ cd /usr/local/snemo/SQuiD

And you can start the DAQ system with

[snemodaq@snemosbc ~]$ snemo_h1daq --config testconfig.conf

One of the main things to remember is that when you wish to power off the crate with the ADCs and SBC board, you must first shut down the SBC. This is because it remotely mounts its disk image, and just turning it off will corrupt the image and possibly render the system unusable.

Shutting down is easy.

[snemodaq@snemosbc ~]$ sudo /sbin/poweroff

It will then take ~20 seconds to fully shut down. After which the rack can be powered off.

Last modified Thu 10 September 2009
Built with GridSite 2.2.6