This is one of the most vexing issues in dealing with image data.

When dealing with 2D (or higher dimensional) image data, we need to specify the transformation between some kind of world coordinates and the image voxel data. The basic stuff that we need to know is:

  • pixel array dimensions (for the i, j, k … image data dimensions)
  • voxel spacing (or more generally the physical direction vectors corresponding to movements along the i, j, k image data axes)
  • coordinate origin
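
Putting those three ingredients together, the index-to-world mapping is world = origin + directions · (index × spacing). A minimal sketch in Python (the function name and defaults are illustrative, not any particular library's API):

```python
import numpy as np

def voxel_to_world(ijk, origin, spacing, directions=None):
    """Map a voxel index (i, j, k) to world coordinates.

    directions is an optional matrix whose columns are the physical
    direction vectors of the i, j, k axes (defaults to axis-aligned)."""
    ijk = np.asarray(ijk, dtype=float)
    if directions is None:
        directions = np.eye(len(ijk))
    return np.asarray(origin) + directions @ (ijk * np.asarray(spacing))

# voxel (2, 3, 4) with 0.5 micron isotropic spacing, origin at (10, 0, 0)
print(voxel_to_world((2, 3, 4), origin=(10, 0, 0), spacing=(0.5, 0.5, 0.5)).tolist())
# → [11.0, 1.5, 2.0]
```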

But this is not all, because there is a choice to be made about how to interpret the image data locations. The NRRD format refers to this as node vs cell. Basically:

  • node = samples are located on image grid points
  • cell = samples are located between image grid points

cell implies looking at the pixels/voxels as if they were little squares/cubes (see A pixel is not a little square by Alvy Ray Smith on this issue). node does not try to specify the physical extent of the sample, merely the spacing at which samples are acquired. For confocal microscopes, node arguably makes more sense, because the voxel spacing is not the same as the point spread function of the microscope (though you can get the microscope to suggest a Z step based on the estimated PSF).
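
The difference in sample placement can be sketched like this (a hypothetical helper, not any library's API): under node the first sample sits exactly at the edge of the extent, under cell it sits half a voxel in.

```python
def sample_positions(n, dx, x0=0.0, mode="node"):
    """Physical positions of n samples spaced dx apart along one axis.

    node: samples sit on grid points starting at x0.
    cell: samples are centres of little boxes, so the first centre
          sits half a voxel in from the edge x0 of the image extent."""
    if mode == "node":
        return [x0 + i * dx for i in range(n)]
    elif mode == "cell":
        return [x0 + (i + 0.5) * dx for i in range(n)]
    raise ValueError(mode)

print(sample_positions(4, 1.0, mode="node"))  # [0.0, 1.0, 2.0, 3.0]
print(sample_positions(4, 1.0, mode="cell"))  # [0.5, 1.5, 2.5, 3.5]
```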

The choice of node vs cell also has an impact on what you think of as the physical image bounds. For an image dimension with n pixels at spacing dx:

  • cell ⇒ physical dimensions are n * dx (dx = pixel width in this axis)
  • node ⇒ physical dimensions are (n - 1) * dx (dx = pixel spacing in this axis)
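
As a worked example of those two formulas (function name illustrative):

```python
def physical_extent(n, dx, mode):
    """Physical length of an image axis with n pixels at spacing dx."""
    if mode == "cell":    # pixels are little boxes of width dx
        return n * dx
    elif mode == "node":  # only the gaps between samples count
        return (n - 1) * dx
    raise ValueError(mode)

# 512 pixels at 0.5 micron spacing
print(physical_extent(512, 0.5, "cell"))  # 256.0
print(physical_extent(512, 0.5, "node"))  # 255.5
```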

In my own code I call these different things:

  • Bounds (the cell definition above, e.g. ImageJ)
  • BoundingBox (the node definition above, as for Amira)

Here is some code which makes the point:

	# nb BoundingBox = CENTRES of outer voxels (like Amira)
	boundingbox <- function(b, bounds=attr(b,"bounds"), voxdim=attr(b,"voxdim")){
		if(!is.null(attr(b,"BoundingBox"))) return(attr(b,"BoundingBox"))
		else if(!is.null(bounds) && !is.null(voxdim)){
			if(is.vector(bounds)) bounds<-matrix(bounds,nrow=2)
			# BoundingBox is inset half a voxel from the Bounds on each side
			# zapsmall gets rid of FP rounding errors
			return(zapsmall(bounds+matrix(c(0.5,-0.5)*rep(voxdim,each=2),nrow=2)))
		}
		else return(NULL)
	}


If you don't think carefully about origin/bounding box issues, it is horribly easy to end up with offset errors. This is most likely when:

  • using software with different conventions
  • downsampling images


How do different software packages handle this?

  • most software hasn't thought about this issue too hard, but generally treats images as composed of little squares/pixels, i.e. cell, to the extent that the question comes up. ImageJ falls into this camp because it reports bounds attributes, but it has rather poor support for spatial origins.
  • Amira is node based
    • You can tell this because of the BoundingBox attribute it uses
    • and how it displays image data (in an orthoslice view you only see 1/4 of the corner pixels, because the rest falls outside the bounding box)
    • the amiramesh file format includes a BoundingBox parameter which specifies the minimum and maximum extents in each axis; this therefore allows specification of an origin (the location of the first sample).
  • NRRD/unu can cope with either node or cell
    • it defines a space origin, saying: "This single vector gives the location of the center of the first sample in the array (the one whose value is given first in the data file, or with the lowest memory address)"
      • i.e. the centre of the first pixel in cell mode and the position of the first node in node mode
    • downsampling assumes cell based if nothing is specified
    • downsampling a cell image (e.g. x2) will result in:
      • an origin shift iff an origin was specified in the input file
      • a change in the space directions that is not an exact multiple of the downsampling factor
    • downsampling a node image (e.g. x2) will result in:
      • no change in origin
      • a doubling in the space directions field
  • CMTK is ?
    • it will write a space origin in an output nrrd iff the target image contains a space origin field
    • what does it think internally (e.g. for interpolation during reformat)?
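
The NRRD downsampling rules above can be sketched as bookkeeping on one axis. This is illustrative arithmetic under the extent-preserving interpretation described above, not unu's actual resampler, and the function name is made up:

```python
def downsample_axis(n, dx, origin, centering, factor=2):
    """New (n, spacing, origin) after integer downsampling of one axis,
    preserving the physical extent implied by the centering convention."""
    if centering == "cell":
        # extent n*dx is preserved; the centre of the first (larger) new
        # sample moves inward, so the origin shifts, and the new spacing
        # need not be an exact multiple of the old one
        new_n = n // factor
        new_dx = n * dx / new_n
        new_origin = origin - dx / 2 + new_dx / 2
    elif centering == "node":
        # extent (n-1)*dx is preserved; the first sample stays put
        new_n = (n - 1) // factor + 1
        new_dx = (n - 1) * dx / (new_n - 1)
        new_origin = origin
    else:
        raise ValueError(centering)
    return new_n, new_dx, new_origin

print(downsample_axis(9, 1.0, 0.0, "cell"))  # (4, 2.25, 0.625): origin shift, inexact spacing
print(downsample_axis(9, 1.0, 0.0, "node"))  # (5, 2.0, 0.0): origin unchanged, spacing doubled
```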
