Several years ago, I worked on an imaging application. The system captured data
using a charge-coupled device (CCD) image sensor. We were
capturing 1024 pixels per scan. We had to capture items moving 150 inches
per second at a resolution of 200 pixels per inch. Each pixel was converted with
an 8-bit ADC, resulting in 1 byte per pixel. The data rate was therefore
150 × 1024 × 200, or 30,720,000 bytes per second.
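A few lines of C make this back-of-the-envelope calculation concrete. This is only a
minimal sketch; the variable names and program structure are illustrative, not part of
the original system:

#include <stdio.h>

int main(void)
{
    /* Illustrative figures from the example above; the names are hypothetical. */
    const double speed_ips       = 150.0;   /* item speed, inches per second    */
    const double resolution_ppi  = 200.0;   /* scan resolution, pixels per inch */
    const double pixels_per_scan = 1024.0;  /* CCD pixels per scan, 1 byte each */

    /* scans per second times bytes per scan gives the raw data rate */
    double scans_per_sec = speed_ips * resolution_ppi;       /* 30,000 scans/sec     */
    double bytes_per_sec = scans_per_sec * pixels_per_scan;  /* 30,720,000 bytes/sec */

    printf("Raw data rate: %.0f bytes/sec (%.2f Mbytes/sec)\n",
           bytes_per_sec, bytes_per_sec / 1.0e6);
    return 0;
}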
We planned to use the VME bus as the basis for the system. Each scan from the
CCD had to be read, normalized, filtered, and then converted to 1-bit-per-pixel
monochrome. During the meetings that were held to establish the system architecture,
one of the engineers insisted that we pass all the data through the VME
bus. In those days, the VME bus had a maximum bandwidth specification of 40
megabytes per second, and very few systems could achieve the maximum theoretical
bandwidth. The bandwidth we needed looked like this:
Read data from camera into system: 30.72 Mbytes/sec
Pass data to normalizer: 30.72 Mbytes/sec
Pass data to filter: 30.72 Mbytes/sec
Pass data to monochrome converter: 30.72 Mbytes/sec
Pass monochrome data to output: 3.84 Mbytes/sec
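Continuing the sketch above, the stage-by-stage requirements in this list can be
totaled and checked against the bus ceiling. The figures come from the text; the
code itself is only an illustration:

#include <stdio.h>

int main(void)
{
    /* Per-stage bandwidth needs from the list above, in Mbytes/sec. */
    const double stages[] = {
        30.72,  /* read data from camera into system */
        30.72,  /* pass data to normalizer           */
        30.72,  /* pass data to filter               */
        30.72,  /* pass data to monochrome converter */
        3.84    /* pass monochrome data to output    */
    };
    const double vme_max = 40.0;  /* theoretical VME maximum, Mbytes/sec */

    double total = 0.0;
    for (size_t i = 0; i < sizeof stages / sizeof stages[0]; i++)
        total += stages[i];

    printf("Total bus traffic: %.2f Mbytes/sec (VME maximum: %.1f)\n",
           total, vme_max);
    if (total > vme_max)
        printf("A single-bus design cannot carry this traffic.\n");
    return 0;
}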
If you add all this up, you get 126.72 Mbytes/sec, well beyond even the theoretical
capability of the VME bus back then. More recently, I worked on a similar
imaging application that was implemented with digital signal processors (DSPs)
and multiple PCI buses; one of those PCI buses was near its maximum
capacity when all the features were added. The point is, know how much data
you have to push around and what buses or data paths you are going to use. If
you are using a standard interface such as Ethernet or FireWire, be sure it will
support the total bandwidth required.