Introduction

We contributed to two papers from Richard Benton's group, both published in 2011:

Complementary function and integrated wiring of the evolutionarily distinct Drosophila olfactory subsystems.
Silbering AF, Rytz R, Grosjean Y, Abuin L, Ramdya P, Jefferis GS, Benton R
J Neurosci 31:13357-75 (2011 Sep 21)
An olfactory receptor for food-derived odours promotes male courtship in Drosophila.
Grosjean Y, Rytz R, Farine JP, Abuin L, Cortot J, Jefferis GS, Benton R
Nature 478:236-40 (2011 Sep 28)

Our contribution was to map the projections of olfactory projection neurons postsynaptic to IR family olfactory receptor neurons. To do this, we reanalysed existing image data from four publications:

Wong AM et al. Cell 109:229-41 (2002 Apr 19)
Jefferis GS et al. Cell 128:1187-203 (2007 Mar 23)
Yu HH et al. PLoS Biol 8 (2010 Aug 24)
Chiang AS et al. Curr Biol 21:1-11 (2011 Jan 11)

to expand the map of Projection Neuron axons in the Mushroom Body Calyx and Lateral Horn that we had first developed in Jefferis GS et al. Cell 128:1187-203 (2007 Mar 23).

The full analysis depends on

  • simple preprocessing and registration of the image data
  • tracing of neurons in the image data
  • import of this data into R for analysis
  • export of processed neuronal tracings for visualisation in Amira

The preprocessing and registration steps depend on freely available software (open source, with the exception of IRTK), but require installing a substantial software stack. Tracing was carried out using Felix Evers's hxskeletonize plugin for the commercial Amira package. The main analysis happens in R and should depend only on R packages. Final visualisation for Grosjean et al. was carried out in Amira, while for Silbering et al. we made simpler visualisations directly in R.

Methods

Excerpted straight from Grosjean et al.

Registration

Projection neuron images (275) from four different sources were brought into a common reference space for analysis [22,38–40]. The common template (Cell07) was from ref. 22, so 236 projection neuron images from that study did not require further processing. The anti-Discs large channel of 34 confocal images from ref. 40 was registered to a single female template brain (FlyCircuit ID 6475, FC6475), which was chosen for good quality staining and imaging, using the CMTK toolkit (http://www.nitrc.org/projects/cmtk) as described previously [23]. The FC6475 template brain was then registered to the Cell07 brain using a landmark registration based on 24 manually chosen anatomical landmarks visible in both anti-Discs large and nc82 staining. This landmark registration used the pnreg command of the IRTK toolkit (http://www.doc.ic.ac.uk/~dr/software), which performs nonlinear third-order B-spline registration [41]. The fiducial registration error, measured as the root mean squared deviation, was 5.1 µm after affine registration and 2.4 µm after warping registration, corresponding to per axis accuracies of 2.9 and 1.4 µm, respectively. The final per axis registration accuracy for independent landmarks (not used during the registration) was 3.3 µm, which is comparable to that of previous studies [22,24]. Three VL2a projection neuron images from ref. 39 were registered to the Cell07 template by choosing 12–14 landmarks in the nc82 channel present in each stack. Landmark registration used the pnreg command of IRTK. Two VL2a projection neuron images from ref. 38 were registered to the Cell07 template using 10–14 landmarks. Because those brains had weak nc82 staining, the only reliable marker was GH146-GAL4-driven expression of a CD2 reporter. We therefore used a two-channel confocal image of an nc82-stained brain with GH146-driven mCD8:GFP expression that had been registered by way of its nc82 channel to the Cell07 template to help choose landmarks. Landmark registration again used the pnreg command of IRTK. The fiducial registration error (root mean squared deviation) for the five brains varied between 3.6 and 4.9 µm (affine) and 1.8 and 3.1 µm (warp).
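
The per axis accuracies quoted above follow from dividing the three-dimensional root mean squared deviation by √3 (e.g. 5.1/√3 ≈ 2.9 µm). A minimal R sketch of these calculations, assuming the matched landmark positions after registration are available as N×3 matrices (the variable and function names are illustrative):

    # fiducial registration error: root mean squared deviation between
    # matched landmark positions ('fixed' and 'moving' are N x 3 matrices)
    rmsd <- function(fixed, moving) sqrt(mean(rowSums((fixed - moving)^2)))
    
    # per axis accuracy, assuming error is spread equally over x, y and z
    per.axis <- function(fre) fre / sqrt(3)
    
    per.axis(5.1)  # ~2.9 µm (affine)
    per.axis(2.4)  # ~1.4 µm (warp)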

Tracing and Visualisation

Neuronal tracing was carried out in Amira (http://www.amira.com) with the hxskeletonize plug-in [42]. The three-dimensional coordinates of neuronal tracings were then transformed from their respective original image coordinates using the CMTK gregxform tool (FlyCircuit images → FC6475 template) and the IRTK ptransformation tool (FC6475 → Cell07, three VL2a images → FC6475). The underlying tools were called using custom code written in R (http://www.r-project.org), which was also used to analyse the transformed neuronal tracings (using the AnalysisSuite R package) [22]. The final visualization was carried out in Amira using Tcl scripts to load, show/hide and colour neuron tracings and to take snapshots. R and Tcl source code, landmark files, template brain images and neuronal tracings will be available on publication at the Jefferis laboratory website (http://flybrain.mrc-lmb.cam.ac.uk).
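
As a rough illustration of how such command-line tools can be driven from R, the sketch below pipes tracing coordinates through CMTK's gregxform, which reads whitespace-separated x y z points on standard input and writes transformed points to standard output. It assumes gregxform is on the PATH; the registration path in the usage line is hypothetical:

    # sketch: map an N x 3 matrix of 3D points through a CMTK registration
    transform.points <- function(xyz, registration) {
      lines <- apply(xyz, 1, paste, collapse = " ")
      out <- system2("gregxform", args = shQuote(registration),
                     input = lines, stdout = TRUE)
      do.call(rbind, lapply(strsplit(out, "[[:space:]]+"), as.numeric))
    }
    # e.g. transform.points(points, "myregistration.list")  # hypothetical path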

Analysis of axonal overlap

The degree of overlap between the axon terminals of different projection neurons was measured by making use of the three-dimensional convex hull of each terminal (using the Qhull library [43] as exposed by the R geometry package). It may help to note that for a set of two-dimensional points in the plane, the convex hull can be obtained by stretching an elastic band to encompass the whole object and then releasing it. To obtain an overlap score for a projection neuron pair (A,B), we calculated three convex hulls, H(A), H(B) and H(A,B), where H(A,B) is the convex hull for all of the points in A and B. We then calculated a normalized overlap score from the hull volumes as S(A,B) = (H(A) + H(B) − H(A,B))/(H(A) + H(B)). If A and B are identical (that is, complete overlap), the score will be 0.5. Less overlap will result in lower scores, with negative scores when the amount of intervening space between the axon terminals exceeds the overlap (if any).

We first calculated an overlap score for all 275 neurons in the data set. We then aggregated the scores for all pairwise combinations of the 46 projection neuron classes in the data set and calculated the median score for each combination. This resulted in a symmetrical distance matrix with 46 × 45/2 = 1,035 unique off-diagonal entries. The median overlap score for VL2a and VA1lm projection neurons was 0.224, which was at the 97.7th centile: that is, the overlap between these two classes was among the strongest 2.3% in the data set. VL2a was the neuronal class with the strongest overlap with VA1lm (and vice versa). Cluster analysis of these data was performed with Ward's method, using the hclust function of R. Overlap scores, s, were converted to a distance, d, suitable for clustering by the simple transform d = 0.5 − s.

The VL2a projection neuron image data were obtained in separate studies from the bulk of the projection neuron data, so it is natural to ask whether this could have had some effect on our analysis. However, the consistent results for VL2a projection neuron images obtained from two different laboratories argue very strongly against this possibility. Visual inspection indicated that there was excellent overlap between the VL2a neurons from the two sources, and the pairwise overlap scores between and within the two groups were almost identical (median 0.284 and median 0.301, corresponding to the 98.0th and 98.8th centiles in Supplementary Fig. 4b). As an additional check, we repeated the axon overlap analysis and clustering with affine registered VL2a neurons (affine registration is more robust, although it is less accurate in the ideal situation). This had a negligible effect on the VL2a/VA1lm median overlap score (0.197 and 0.224, corresponding to the 96.2nd and 97.7th centiles, respectively) and had no effect on the clustering results (G.S.X.E.J., data not shown).
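
For concreteness, here is a minimal R sketch of the overlap score, using convhulln from the geometry package (which wraps Qhull); A and B stand for N×3 matrices of axon terminal points, and the function names are illustrative:

    library(geometry)
    
    # volume of the 3D convex hull of an N x 3 matrix of points
    hull.volume <- function(xyz) convhulln(xyz, options = "FA")$vol
    
    # normalised overlap score S(A,B); identical point sets score 0.5,
    # well-separated sets score below 0
    overlap.score <- function(A, B) {
      hA <- hull.volume(A)
      hB <- hull.volume(B)
      hAB <- hull.volume(rbind(A, B))
      (hA + hB - hAB) / (hA + hB)
    }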

IN PROGRESS

  • Prepare R data package for download
  • Code
    • To repeat the analyses and generate most figures, you will need an R installation plus a number of additional packages; these are required by the general-purpose code library and the project-specific scripts described in the next two bullet points
    • The analysis for this project depends on general purpose code that I have packaged as R AnalysisSuite. This requires some installation, as described on the GitHub page
    • To start looking at these neurons, you can do the following in an R session
      source("~/projects/AnalysisSuite/R/Code/Startup.R") # adjust according to install location
      install.packages("rgl")
      load(url("http://flybrain.mrc-lmb.cam.ac.uk/si/grosjean11/MyNeuronsFCIR.rda"))
      plot3d(MyNeurons,WithNode=FALSE)
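      Each neuron in MyNeurons has per-neuron metadata (e.g. Glomerulus, Brain, PNType) in an attached data frame, which the clustering code below relies on; assuming the load above succeeded, you can inspect it with:
      head(attr(MyNeurons, 'df'))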
    • To run the clustering:
      load(url("http://flybrain.mrc-lmb.cam.ac.uk/si/grosjean11/MyNeuronsFCIR.rda"))
      load(url("http://flybrain.mrc-lmb.cam.ac.uk/si/grosjean11/allchulloverlaps.rda"))
      
      bigdf<-attr(MyNeurons,'df')
      
      # now analyse these data
      # throw out LHNs and funny PNs and also one really poorly registered WongWang VL2a
      bigdf.sel=subset(bigdf,PNType%in%c("iPN","mPN") & Brain!='1718A')
      bigdf.sel$Brain=as.character(bigdf.sel$Brain)
      bigdf.sel$Glomerulus=factor(bigdf.sel$Glomerulus)
      allchulloverlaps.norm.sel=allchulloverlaps.norm[bigdf.sel$Brain,bigdf.sel$Brain]
      
      
      # calculate median overlap per glomerulus
      medianoverlap=matrix(ncol=nlevels(bigdf.sel$Glomerulus),nrow=nlevels(bigdf.sel$Glomerulus))
      colnames(medianoverlap)=levels(bigdf.sel$Glomerulus)
      rownames(medianoverlap)=levels(bigdf.sel$Glomerulus)
      for(i in levels(bigdf.sel$Glomerulus)){
      	for(j in levels(bigdf.sel$Glomerulus)){
      		medianoverlap[i,j]=median(allchulloverlaps.norm.sel[
      			bigdf.sel$Glomerulus==i,bigdf.sel$Glomerulus==j])
      	}
      }
      
      hc=hclust(as.dist(0.5-medianoverlap),meth='ward')
      hc$labels=sub("VL2pP","VL2p+",hc$labels)
      plot(hc,sub='',xlab="Ward's method",main="Cluster by Normalised Overlap Score",cex=.7)
    • Amira visualisation code (including pointer to AnalysisSuite scripts)
  • Image Data
    • Pointers to Image data from Cell 2007
    • Links to LSM image data from http://flycircuit.tw
    • Obtain permission to post additional data stacks from Jing/Allan and Sam/Tzumin
