UPenn - Vision Research Center Image Analysis Example Projects

 

March, 2014: Cell Density Mapping Project

For this project Dr. William Beltran wanted to produce density maps of ganglion cells from confocal images of whole retinas. The process of segmenting the cells and using their coordinates to produce a three-dimensional density map was hindered by the very large size of the images. To process such large files, a hardware upgrade to the dedicated workstation at the School of Dental Medicine Center for Cellular Imaging was planned and implemented by the VRC image analysis module. The ganglion cells of a dozen flat-mount retinas were segmented with the microscope’s Nikon Elements software. The Elements program produces data files in an unusual Japanese character encoding, which required a pre-processing step before the files could be read by any other software. The data were then processed by a custom Python script that binned and plotted the ganglion cell density of the whole retina in a single image.
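
As an illustration of the final step, here is a minimal Python sketch of binning cell centroids into a two-dimensional density map, assuming the segmented coordinates have already been exported as x,y pairs; the bin size, column layout, and the Shift-JIS source encoding are assumptions for illustration, not details of the actual script.

import numpy as np
import matplotlib.pyplot as plt

def reencode_elements_export(src_path, dst_path, src_encoding="shift_jis"):
    """Re-save a Nikon Elements export as UTF-8 so downstream tools can read it.
    The source encoding here is an assumption; adjust if the files differ."""
    with open(src_path, "r", encoding=src_encoding, errors="replace") as src:
        text = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text)

def plot_density_map(xy, bin_size_um=100.0, out_path="density_map.png"):
    """Bin cell centroids (N x 2 array, microns) into a grid and plot counts per bin."""
    x, y = xy[:, 0], xy[:, 1]
    x_bins = np.arange(x.min(), x.max() + bin_size_um, bin_size_um)
    y_bins = np.arange(y.min(), y.max() + bin_size_um, bin_size_um)
    counts, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins])
    plt.imshow(counts.T, origin="lower", cmap="viridis",
               extent=[x_bins[0], x_bins[-1], y_bins[0], y_bins[-1]])
    plt.colorbar(label="cells per bin")
    plt.xlabel("x (um)")
    plt.ylabel("y (um)")
    plt.savefig(out_path, dpi=300)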

For details refer to:
Beltran WA (School of Veterinary Medicine), Cideciyan AV, Guziewicz KE, Iwabe S, Swider M, et al. (2014) Canine Retina Has a Primate Fovea-Like Bouquet of Cone Photoreceptors Which Is Affected by Inherited Macular Degenerations. PLoS ONE 9(3): e90390. doi: 10.1371/journal.pone.0090390

 

November, 2013: e-ROP Fundus Photograph Viewer


The image analysis module developed a graphical user interface (GUI) for fundus photograph graders at the Scheie Image Reading Center to view and grade fundus photographs from the e-ROP study coordinated by Dr. Quinn and Dr. Maguire. The tool provides an Open File dialog box for the user to locate the directory and open all photographs captured for a patient. These images are laid out as small icons on the left, with an annotation name shown on the right. The user can click on any icon to get a larger view of the corresponding image, with controls for changing the image appearance by switching between color and grayscale views or adjusting brightness and contrast. A view at the original image resolution can also be popped up to adjust the image appearance in an alternative way, and the tool synchronizes all views of an image when the user changes its appearance. The path of the images is shown in a box at the bottom of the GUI, and all parameters can be reset to their defaults for each image independently or for all images together. Please contact Yuanjie Zheng (zheng.vision@gmail.com) if you have a potential use for this or a similar tool.
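
For readers interested in building something similar, the snippet below is a minimal sketch (in Python with Pillow) of the image-appearance operations such a viewer relies on; the function names and default values are illustrative and are not the e-ROP tool's actual API.

from PIL import Image, ImageEnhance

def apply_appearance(path, grayscale=False, brightness=1.0, contrast=1.0):
    """Load a fundus photograph and return a view-adjusted copy.
    Brightness/contrast factors of 1.0 leave the image unchanged."""
    img = Image.open(path)
    if grayscale:
        img = img.convert("L")
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    return img

def make_thumbnail(path, max_size=(128, 128)):
    """Produce the kind of small icon shown in a left-hand thumbnail panel."""
    img = Image.open(path)
    img.thumbnail(max_size)  # resizes in place, preserving aspect ratio
    return img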

****

November, 2013: Landmark Matching Based Retinal Image Alignment/Registration

Landmark matching to create spatial correspondences between two FA retinal images

The image analysis module developed a novel automated tool for intra- and inter-modality retinal image alignment/registration based on matching landmarks (crossing points or bifurcations of vessels), which has been tested on images from both fundus photography and fluorescein angiography (FA) provided by Dr. Maguire. Image alignment/registration is the process of establishing spatial correspondences between two retinal images and can be used to transform the two images into a common coordinate system. It is fundamental to applications as diverse as tracking the progression of retinal disease, detecting the locations of scars and burns, mosaicing (constructing montages) to provide a wider view of the retina, fusing images from different modalities, and creating tools to assist ophthalmologists during laser surgery or other procedures.
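
As a simple illustration of the registration step, the sketch below fits an affine transform to a set of already-matched landmark pairs by least squares; this is not the module's actual algorithm, and the landmark detection and matching themselves are assumed to have been done elsewhere.

import numpy as np

def fit_affine(src_pts, dst_pts):
    """Estimate a 2x3 affine transform mapping src_pts onto dst_pts.
    Both inputs are N x 2 arrays of N >= 3 matched landmarks."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                        # N x 3 design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3 x 2 solution
    return params.T                                   # 2 x 3 affine matrix

def apply_affine(affine, pts):
    """Map N x 2 points through the estimated affine transform."""
    pts = np.asarray(pts, dtype=float)
    return pts @ affine[:, :2].T + affine[:, 2]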

****

October, 2012: Ribbon/Punctum Segmentation and Matching

This project facilitated the counting, pairing, and intensity measurement of structures in the retina for Dr. Noga Vardi. Confocal images of mouse retina were obtained with key protein structures in the light transduction cascade, ‘ribbons’ and ‘puncta,’ typically stained green and red respectively. These stained proteins accumulate either in the dendritic tips of ON-bipolar cells or in the synaptic terminals of photoreceptors. When imaging a pair of wild type (WT) and knockout (KO) retinas, the ratio of staining intensity can provide a reasonable approximation of the differing expression and localization of these proteins in normal WT and in KO animals.

A MATLAB script was created to segment the ribbons, and then to segment and match any puncta located in close proximity to each ribbon. A GUI was created to provide manual curation of the identified objects. The GUI features included addition and deletion of single and paired ribbons, image zoom, object marker toggling, and an area adjustment used in calculating the intensities of the puncta. Coordinate data for each ribbon identified in the images are then saved to an Excel spreadsheet along with punctum coordinates and intensities, and a calculation of the ratio of paired to unpaired ribbons.
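
The actual tool is a MATLAB script with a curation GUI, but the core segment-and-pair idea can be sketched in a few lines of Python; the intensity threshold and pairing radius below are illustrative assumptions.

import numpy as np
from scipy import ndimage as ndi

def segment(channel, threshold):
    """Label connected bright structures in one color channel and return
    the label image plus the centroid of each object."""
    mask = channel > threshold
    labels, n = ndi.label(mask)
    centroids = np.array(ndi.center_of_mass(mask, labels, range(1, n + 1)))
    return labels, centroids

def pair_ribbons_to_puncta(ribbon_xy, punctum_xy, max_dist=5.0):
    """Pair each ribbon with the nearest punctum within max_dist pixels.
    Returns a list of (ribbon_index, punctum_index or None) tuples."""
    pairs = []
    for i, r in enumerate(ribbon_xy):
        if len(punctum_xy) == 0:
            pairs.append((i, None))
            continue
        d = np.linalg.norm(punctum_xy - r, axis=1)
        j = int(np.argmin(d))
        pairs.append((i, j if d[j] <= max_dist else None))
    return pairs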

****

October, 2012: Analysis of Espion ERG Data

This project involved parsing numerical data from multiple data files produced by the Diagnosys Espion ERG system. Dr. Gus Aguirre's and Dr. Andras Komaromy's research uses six different protocols, each resulting in a separate CSV data file. In addition to the channel 1 and 2 data, the CSV files contain a large amount of unneeded data. In the past this data had been manually copied, plotted, and exported into PDF format. This process was so tedious and time consuming that only a small fraction of the available data was ever plotted and examined.

Using MATLAB, a more efficient method was created to open and load any available data files and then loop through the data, parsing out the channel 1 and 2 information. The ability to filter and rescale the data was also incorporated into the script. The data are then plotted, labeled by protocol name, and exported to PDF format.
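
A rough Python equivalent of that parse-plot-export loop is sketched below; the column names ('Time', 'Channel 1', 'Channel 2') are hypothetical placeholders for the real Espion CSV layout, and the actual tool was written in MATLAB.

import glob
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

def export_erg_plots(csv_dir, pdf_path="erg_plots.pdf"):
    """Plot channel 1 and 2 traces from every CSV in a directory into one PDF."""
    with PdfPages(pdf_path) as pdf:
        for path in sorted(glob.glob(f"{csv_dir}/*.csv")):
            df = pd.read_csv(path)
            fig, ax = plt.subplots()
            # Keep only the two recording channels; ignore everything else.
            for col in ("Channel 1", "Channel 2"):
                if col in df.columns:
                    ax.plot(df["Time"], df[col], label=col)
            ax.set_title(path)
            ax.set_xlabel("time (ms)")
            ax.set_ylabel("amplitude (uV)")
            ax.legend()
            pdf.savefig(fig)
            plt.close(fig)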

****

October, 2012: Analysis of Mouse ERG Data

The data from Dr. Jean Bennett's and Dr. Arkady Lyubarsky's mouse electroretinography (ERG) recordings are produced as numerical values in text files. Analysis of these experimental results required time-consuming, repeated manual selection and plotting of small sequences of records for each of the stimuli present in each data file.

A MATLAB script was created to read the selected data file, match event times with the asynchronous data records, create right- and left-eye plots for each of the stimuli, and calculate the mean and standard deviation reference values for each event. The amplitude of the response to each stimulus and the total averaged response for the left and right eyes are then calculated. Additionally, if single-sided stimuli are presented, average response amplitudes for each of the four possible permutations are determined. All data and plots are then programmatically saved in an Excel file along with any information from the supplementary files created by the ERG recording equipment.
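
The sketch below illustrates, in Python rather than MATLAB, how event times might be matched to asynchronous records and how per-stimulus summaries could be tabulated; the column names are hypothetical.

import numpy as np
import pandas as pd

def match_events(record_times, event_times):
    """Return, for each event, the index of the first record at or after it."""
    return np.searchsorted(record_times, event_times, side="left")

def summarize_responses(responses):
    """responses: DataFrame with columns 'stimulus', 'eye', and 'amplitude'.
    Returns mean and standard deviation of amplitude per stimulus and eye."""
    return (responses.groupby(["stimulus", "eye"])["amplitude"]
                     .agg(["mean", "std"]))

# Example: summarize_responses(df).to_excel("mouse_erg_summary.xlsx")
# writes the summary table to an Excel file.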

****

May, 2012: Analysis of Pupillometry Data

The image analysis module developed automatic methods for the analysis of pupillometry data.

Data produced by Dr. Jean Bennett’s pupillometry equipment is saved as numerical data in CSV format. Dr. Bennett was manually transferring this data to Excel, selecting the proper data, and plotting it to find the maxima and minima for each eye of each subject. Then the pupil amplitudes and pupil change velocities were calculated. This was a time consuming procedure, and because of this it was not possible to examine all of the available data.

We created an easy-to-use MATLAB program with a graphical user interface. The program parses the data files, identifies outliers, plots the data, and calculates the summary measures. The user is able to use the plots to check that the automatically generated numbers are sensible, and to make corrections in cases where the algorithm misidentifies the maxima and minima. The results are then saved to a spreadsheet in an investigator-friendly format.
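
A minimal Python sketch of the per-trace summary measures is shown below; the column names and the definition of peak constriction velocity are assumptions for illustration, not the program's actual implementation (which is in MATLAB).

import numpy as np
import pandas as pd

def pupil_summary(csv_path):
    """Compute basic summary measures for one pupillometry trace.
    Assumes hypothetical columns 'time_s' and 'pupil_diameter_mm'."""
    df = pd.read_csv(csv_path)
    t = df["time_s"].to_numpy()
    d = df["pupil_diameter_mm"].to_numpy()
    amplitude = d.max() - d.min()      # constriction amplitude
    velocity = np.gradient(d, t)       # rate of change of diameter
    return {
        "max_diameter": d.max(),
        "min_diameter": d.min(),
        "amplitude": amplitude,
        "peak_constriction_velocity": velocity.min(),  # most negative slope
    }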

This program could be generalized to handle other similar types of data. Contact Richard Zorger (zorger@mail.med.upenn.edu) if this is of interest.

****

March, 2012: Organization of Image Database

The image analysis module developed a system for organizing a large database of patient images for Penn's Comparison of Age-Related Macular Degeneration Treatments Trials (CATT). This system will enable more systematic analysis of these images for future studies.

****

January, 2012: Automated Detection of Drusen in Fundus Images

Drusen detection

The image analysis module developed an algorithm for automated detection and counting of drusen in fundus images provided by Dr. Stambolian. An example is shown in the image: the left view shows an unmarked fundus image, and the right view shows the same image with three concentric rings around the (automatically detected) fovea and with the detected drusen marked in blue. This work was used to produce preliminary data for a grant application submitted by Dr. Stambolian. Further development of the algorithm is ongoing for a second set of fundus images collected as part of Penn's Complications of Age-Related Macular Degeneration Prevention Trial (CAPT), with the ultimate goal of allowing automated grading of fundus images and automated extraction of statistics on the size and distribution of the drusen. The segmentation algorithms developed for this application may have other applications for vision research. Please contact Yuanjie Zheng (zhengyuanjie@gmail.com) if you have a potential application.
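
As a rough illustration of the counting step, the Python sketch below tallies bright, drusen-like spots inside concentric rings around a supplied fovea location; the threshold and ring radii are arbitrary placeholders, and this is not the module's actual detection algorithm.

import numpy as np
from scipy import ndimage as ndi

def count_drusen_by_ring(image, fovea_xy, ring_radii=(500, 1500, 3000),
                         threshold=0.8):
    """image: 2D grayscale array scaled to [0, 1]; fovea_xy: (x, y) in pixels.
    Returns the number of detected bright spots within each ring radius."""
    mask = image > threshold
    labels, n = ndi.label(mask)
    if n == 0:
        return [0] * len(ring_radii)
    centroids = np.array(ndi.center_of_mass(mask, labels, range(1, n + 1)))
    # center_of_mass returns (row, col); convert to distance from the fovea.
    d = np.hypot(centroids[:, 1] - fovea_xy[0], centroids[:, 0] - fovea_xy[1])
    return [int(np.sum(d <= r)) for r in ring_radii]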