To reconstruct the electron wave exiting the object plane (the object wave) from a hologram, two steps are necessary. First, the complex image wave has to be regenerated from the hologram; then the microscope errors can be corrected by deconvolution. This paper deals with the first step.
We describe a reconstruction process in real space using an artificial neural network, because the streaking artefacts known from Fourier-space reconstruction can thus be avoided. Neural nets are establishing themselves as a powerful tool in image processing. In our case, a neural net has been trained to perform a pointwise calculation of the image wave from the fringe patterns in small hologram areas (superpixels).
The Reconstruction Procedure
The neural net receives the intensity values of a superpixel of 7 by 7 pixels as input data. This area covers about 1.5 hologram fringes.
The task of the net is to calculate the image wave in the center of the superpixel. The real and imaginary parts of the image wave are represented by the two outputs.
To reconstruct the whole image, the superpixel is shifted over the hologram and the image wave is calculated pixel by pixel. This takes about 2 minutes on a DEC workstation for a 1024 by 1024 pixel hologram.
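As a minimal sketch, the pointwise reconstruction loop might look like the following; the trained net is abstracted as a callable with 49 inputs and two outputs, and the `dummy_net` stand-in (mean intensity as real part) is purely illustrative, not our trained network.

```python
import numpy as np

def reconstruct(hologram, net, k=7):
    """Slide a k-by-k superpixel over the hologram and let `net`
    predict the real and imaginary part of the image wave at the
    superpixel centre.  Border pixels without a full neighbourhood
    are left at zero."""
    h, w = hologram.shape
    r = k // 2
    wave = np.zeros((h, w), dtype=complex)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = hologram[y - r:y + r + 1, x - r:x + r + 1]
            re, im = net(patch.ravel())   # two outputs: Re and Im
            wave[y, x] = re + 1j * im
    return wave

# illustrative stand-in for the trained net (NOT the real network)
dummy_net = lambda v: (v.mean(), 0.0)
holo = np.random.rand(32, 32)
wave = reconstruct(holo, dummy_net)
```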
The Neural Net
We used a feed-forward neural network and obtained the best results with two hidden layers. The network was simulated and trained using the Stuttgart Neural Network Simulator (SNNS) .
Generating the Training Patterns.
The net can only generate proper results when it is trained with patterns similar to real world holograms. The training patterns were generated by simulating hologram patterns of superpixel size using the interference equation.
In order to obtain realistic patterns, two disturbing effects are taken into account.
First, the image wave is not exactly constant over the superpixel area. This was simulated by allowing the wave a constant slope, chosen at random for each training pattern.
Second, the inevitable noise was simulated by disturbing the calculated intensities with Poisson noise.
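The pattern generation can be sketched as follows; the interference equation with unit reference amplitude, the slope range, and the electron dose `counts` are our own illustrative assumptions, not the exact values used for training.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_pattern(k=7, fringes=1.5, counts=1000):
    """One simulated superpixel: interference of a (slightly tilted)
    image wave with a plane reference wave, plus Poisson shot noise.
    I = A^2 + a^2 + 2*A*a*cos(2*pi*q*x + phi), with A = 1 assumed."""
    y, x = np.mgrid[0:k, 0:k]
    q = fringes / k                        # carrier: ~1.5 fringes per superpixel
    amp = rng.uniform(0.5, 1.0)            # image-wave amplitude a
    phi0 = rng.uniform(0, 2 * np.pi)       # image-wave phase at the centre
    # disturbing effect 1: constant phase slope, chosen at random
    slope = rng.uniform(-0.1, 0.1, size=2)
    phi = phi0 + slope[0] * (x - k // 2) + slope[1] * (y - k // 2)
    intensity = 1.0 + amp**2 + 2 * amp * np.cos(2 * np.pi * q * x + phi)
    # disturbing effect 2: Poisson noise for the given dose
    noisy = rng.poisson(intensity * counts) / counts
    target = amp * np.array([np.cos(phi0), np.sin(phi0)])  # Re, Im at the centre
    return noisy.ravel(), target

inp, out = training_pattern()
```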
Neural nets are an attempt to imitate the flexible and massively parallel information processing of biological nervous systems on a computer. The net consists of simple units, called neurons, which are connected by links with variable weights.
The net can be implemented as parallel hardware, but is usually just simulated on a conventional computer.
A single neuron simply sums the data from its inputs, each multiplied by a weight, and then applies a nonlinear sigmoid function to the result.
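In code, a single neuron reduces to a few lines; the logistic sigmoid is assumed here as the nonlinearity, and the input and weight values are arbitrary examples.

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs followed by the logistic sigmoid."""
    s = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-s))

# example: 0.5*1.0 + (-1.0)*0.5 + 2.0*0.25 = 0.5, then sigmoid(0.5)
out = neuron(np.array([0.5, -1.0, 2.0]), np.array([1.0, 0.5, 0.25]))
```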
In feed-forward neural nets, the links form no feedback loops, i.e. information is passed from the layer of input units through one or more layers of hidden units to the output units. See the picture at the top of this poster.
FF-nets can be trained to react to input signals with certain output signals and are particularly suited for image processing tasks.
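A complete forward pass through such a net is just repeated application of the neuron rule, layer by layer. The layer sizes below (49 inputs for a 7 by 7 superpixel, two hidden layers, 2 outputs) mirror our topology but the hidden-layer widths are illustrative guesses; a real net for this task would also rescale the sigmoid outputs so that negative real and imaginary parts can be represented.

```python
import numpy as np

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

def forward(x, layers):
    """Propagate an input vector through a list of (weights, bias)
    layers; there are no feedback loops, so one pass suffices."""
    for w, b in layers:
        x = sigmoid(x @ w + b)
    return x

rng = np.random.default_rng(1)
sizes = [49, 20, 10, 2]   # input, two hidden layers, output (Re, Im)
layers = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
y = forward(rng.random(49), layers)
```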
In nature, similar structures can be found. For example, visual information is preprocessed in a similar way in the retina.
Weights are numbers which determine the strengths of the links between neurons. Weights can be positive or negative, corresponding to excitatory or inhibitory synapses as in biology.
The weights contain all the information the net has learned.
Training patterns are input vectors with known outputs. From these examples, the net learns how to react to the input data. When trained properly, the net generalizes and gives sensible results for input patterns not contained in the training set. As the generalization is a kind of interpolation, such behaviour is only possible when the training patterns are distributed over the whole space of possible input vectors.
Backpropagation is the most popular learning algorithm for feed-forward nets. First, the weights are initialized to random values. Then, the input vectors of the training patterns are applied to the net one after another, and the computed output is compared with the desired output. The weights are changed to decrease the error. This is done by calculating the gradient of the error as a function of the weights.
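A minimal sketch of this procedure on a toy regression task; the task, network size and all hyperparameters below are arbitrary illustrations, not the SNNS setup used for the hologram net.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# toy task (our own choice): learn y = [mean(x), max(x)] from 5-dim inputs
X = rng.random((200, 5))
Y = np.stack([X.mean(axis=1), X.max(axis=1)], axis=1)

# first, the weights are initialized to random values
W1 = rng.normal(0.0, 0.5, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 2)); b2 = np.zeros(2)

def mse():
    return np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - Y) ** 2)

err_before = mse()
lr = 0.3
for _ in range(1000):
    H = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
    out = H @ W2 + b2                     # linear output layer
    d_out = 2.0 * (out - Y) / len(X)      # gradient of the squared error
    dW2, db2 = H.T @ d_out, d_out.sum(axis=0)
    d_H = (d_out @ W2.T) * H * (1.0 - H)  # chain rule through the sigmoid
    dW1, db1 = X.T @ d_H, d_H.sum(axis=0)
    # change the weights to decrease the error
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
err_after = mse()
```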
The algorithm was independently invented several times but began to be widely used only after the publication of .
How to deal with Noise?
Real holograms always contain a certain amount of noise. As the training patterns should cover the whole space of possible input vectors, the training patterns should also contain noise. On the other hand, the net should also be trained with noiseless data to avoid disturbance of the outputs by both input and training noise.
The adjacent plots show the learning behaviour of the neural net.
After each training cycle, the net is tested with a validation pattern set and the mean square error of the net output is calculated.
The upper plot shows that a net trained with noise makes large errors when tested with noiseless data. Conversely, as shown in the lower plot, a net trained without noise produces large errors when tested with noisy data.
The best way to train a net to handle both cases is to start the training with very noisy patterns and then gradually decrease the noise to zero.
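Such a schedule can be as simple as a linear ramp clipped at zero; the starting noise level and the number of cycles below are arbitrary example values.

```python
import numpy as np

def noise_schedule(cycles, sigma0=0.2):
    """Noise level per training cycle: start very noisy, decrease
    linearly, and stay at exactly zero for the final cycles."""
    return np.maximum(0.0, np.linspace(sigma0, -0.02, cycles))

sched = noise_schedule(100)   # e.g. one sigma value per training cycle
```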
Where does the Amplitude Information come from?
The amplitude a of the image wave appears twice in the equation for the hologram intensities (with reference amplitude A, carrier frequency q and image-wave phase φ):

I(x) = A² + a² + 2·A·a·cos(2πqx + φ)

The first, nonlinear part (a²) is not modulated by the carrier and therefore appears in the centerband, while the second term (2·A·a) shows its effect in the sideband.
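The separation of the two terms can be made visible with a small 1-D simulation; the amplitude, carrier frequency and sampling chosen below are arbitrary, and the reference amplitude is set to 1.

```python
import numpy as np

N = 256
x = np.arange(N)
q_bin = 51                 # carrier frequency chosen on an exact FFT bin
q = q_bin / N
a = 0.8                    # image-wave amplitude (assumed constant)
intensity = 1.0 + a**2 + 2 * a * np.cos(2 * np.pi * q * x)

spectrum = np.abs(np.fft.fft(intensity)) / N
centerband = spectrum[0]       # zero frequency: carries 1 + a^2
sideband = spectrum[q_bin]     # carrier frequency: carries a
```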
When the amplitude factors entering the two terms are treated as independent variables,
the influence of both terms on the reconstructed amplitude
can be seen. The plots show that the neural net tends to use more sideband information than a least-squares-fitting approach .
Result of the Real Space Reconstruction in Fourier Space
- Feed forward neural nets can be trained to extract the complex image wave from a hologram.
- The reconstruction process is carried out in real space.
- No far-reaching artefacts are produced by the borders of the image.
- Both center- and sideband information are used for the amplitude reconstruction.
- The computation time needed to reconstruct a hologram is smaller than for least-squares fitting with numerical minimum search.
- Fourier-space reconstruction and optimized analytical least-squares fitting are even faster.
References:
Tonomura, A.: Electron Holography. North-Holland, 1995.
Wassermann, P. D.: Neural Computing: Theory and Practice. ISBN 0-442-20743-3.
Zell, A.: Simulation Neuronaler Netze. Addison-Wesley, 1996 (reprint). ISBN 3-89319-554-8.
Rumelhart, D. E., Hinton, G. E. and Williams, R. J.: Learning internal representations by error propagation. In: Parallel Distributed Processing, Vol. 1, pp. 318-362. MIT Press, Cambridge, MA.
Meyer, R. and Heindl, E.: Optimized Reconstruction of Electron Holograms in Real Space by Least Squares Fitting. Poster T14-11 at this conference.