A I M T U T O R I A L
Tom Hoeksma and Ad Herweijer
Delft University of Technology
Faculty of Applied Physics
Pattern Recognition Group
2628 CJ DELFT
The image processing software package AIM (Atari Image Manager)
has been developed at Delft University of Technology by Frans
Groen (the 'image processor') and was adapted for the Atari ST
computer in cooperation with Robert de Vries (the GEM
programmer). Both were members of the Signal-/System-techniques
Group in the Faculty of Applied Physics, Frans as a senior staff
member and Robert as a graduate student. Recently, Frans changed
jobs to become professor of Applied Informatics at the University
of Amsterdam.
AIM is designed to be an entry level 'appetizer' for the art of
image processing. It gives the interested layperson an
opportunity to acquaint himself with the techniques and
terminology used in this dynamic field. The experiments
described may be regarded as a lower bound of what can be
accomplished with AIM, the upper bound being limited by the
user's imagination only.
This tutorial is not a full course on digital image processing.
Such courses are frequently organized by the Pattern Recognition
Group. For course information, please contact the Secretary of
the Pattern Recognition Group, Faculty of Applied Physics,
Lorentzweg 1, 2628 CJ Delft, the Netherlands, Phone 015-781416.
Note that the operations used in the examples and demos cover
only a minor part of AIM's capabilities. A comprehensive
description of all the available operations is given in a file
called AIMANUAL.DOC. Like this tutorial, it is contained in a
compressed file (DOCS_IMS.ARC), which can be found in the folder
COMPRESS on the distribution disk (together with a decompression
program).
Image processing is not particularly difficult; many people will
have had their first image processing experience in the
photographic darkroom. Note, however, that AIM, being dedicated
to digital image processing, requires some basic knowledge of
digital systems. So those of you who until now have successfully
avoided learning about bits, bytes and binary numbers, be
prepared to face the unavoidable! The 15 minutes or so spent
reading about these concepts will be richly rewarded by a better
understanding of AIM's operations, if not by the deep admiration
of those around you for your newly acquired vocabulary.
AIM is a result of many years of research and development in the
Pattern Recognition Group. Many of the image processing routines
were contributed by (among others) Gert van Antwerpen, Frans
Groen, Piet Verbeek, Ben Verwer, Lucas van Vliet and Igor Weber.
This
introduction was written (if not compiled from existing texts) by
Tom Hoeksma and Ad Herweijer.
Previous versions of AIM have already extensively penetrated the
Atari ST community. However, as the documentation supplied with
those first public domain versions was rather limited, the
exploration of the software has been a mere 'image processing
adventure' to most of the 'early birds'. The authors hope that
this introduction, together with the related examples and demos
included on the disk, will enable all prospective users (early
birds and newcomers alike) to better understand what was, and is,
going on.
Feedback plays an important role in the effective use of image
processing as well as in software development. Therefore, we
invite you to send any comments and suggestions to:
Prof. dr. ir. F.C.A. Groen
c/o Delft University of Technology
Faculty of Applied Physics
Pattern Recognition Group
Lorentzweg 1, 2628 CJ DELFT
WHAT CAN WE EXPECT OF IMAGE PROCESSING?
Maybe you have already had some experience with AIM or seen
somebody else work with it. You may have wondered, then, how
image processing could make features appear in the display that
weren't there before the operation. Text that at first was
unreadable, suddenly could be read easily; freckles in a girl's
face, that didn't show in the original display, could (to her
distress!) be counted after some appropriate keystrokes. Perhaps
you were convinced that you had acquired a little magician,
hidden on that tiny floppy disk.
Be warned, however! Image processing can never recover features
of the original image that have somehow been lost. So, be
prepared for surprises, but don't expect miracles! Image
processing can enhance particular details in an image, but
usually other details will have to be sacrificed.
If 'A picture is worth a thousand words', image processing allows
you to arrange those words to make the picture reveal its
(secret) 'stories'. But remember: one picture's thousand words
are different from another's. Every story will inherit the
characteristics of the picture it was extracted from. The art of
image processing is: to make the picture tell the one story
that's useful to you.
STAGES IN AN IMAGE PROCESSING APPLICATION
In almost every image processing application we can recognize
some stages that eventually lead to the final result: recording,
preprocessing, segmentation, postprocessing, analysis and
interpretation.
Probably the most important stage is the recording of the
original picture. As stated above, information that gets lost in
the recording process can never be recovered. For the recording a
video-camera is required, capable of converting the light,
reflected from the object, into an electrical signal. As the
video signal cannot be read directly by the computer, this signal
must be sampled and digitized and the digital information must be
transferred in an orderly manner to the computer's memory. This
is done by a special piece of hardware, a so-called 'frame
grabber'. Various frame grabbers are available for the Atari ST.
In the Pattern Recognition Group monochrome and color frame
grabbers have been developed that eventually will be made
available.
After the image is stored in the computer it must be
preprocessed. Here the user tries to enhance wanted features or
filter out unwanted ones, to satisfy the requirements of further
analysis. Examples of preprocessing are the correction for
changing grey values in the background (shading), the suppression
of noise, and the determination of edges in the image.
In order to be able to determine properties of single objects in
the image, these objects must first be made detectable. If a grey
value image contains dark objects on a light background,
segmentation can, for instance, result in a binary image with
white objects on a black background.
Postprocessing can improve the result of the segmentation.
Examples are the morphological (binary) operations. Holes in
objects can be closed, edges peeled off, etc.
In the analysis stage, measurements can be performed on the
segmented objects. Examples: area, perimeter, curvature.
In the interpretation stage the results of the measurements are
interpreted.
An 'intelligent robot' in a factory, that has to decide whether
it 'sees' a nut or a bolt, must be programmed to go through all
stages mentioned here. Programming such a robot requires much
intuition on how computers 'think'. An important requirement for
the programmer is to be able to forget the way he would do the
job himself! He must belong to a new breed of programmers who can
think like a machine.
SOME TERMINOLOGY USED IN IMAGE PROCESSING
Images and displays
In image processing we must distinguish between 'images' and
their 'displays'. In this context we define an image as the
information that has been stored (as 1's and 0's) with a
particular filename in the computer's memory or on a magnetic
disk. A display is the representation of such an image on a video
screen or (as a 'hardcopy') on a piece of paper.
An important property of an image is its information content. The
information in an image depends on the presence of details
(lines, edges, intensities, colors, etc.). Moreover, the
information content determines the amount of computer memory
required to store the image.
Note that it's one thing to store an image accurately, but that
it's another to display it with full conservation of information!
The display hardware (video controller, monitor, printer, etc.)
determines the degree to which the information in an image can be
observed exactly. In image processing, there is a tendency to
make the information content of stored images higher than can
actually be displayed. This reduces the chance that the effects
of rounding errors in intermediate results will deteriorate the
final display. (Moreover, RAM-chips and floppy disks are much
cheaper than display hardware!). AIM, for example, uses 64
kilobytes (kB) of RAM for the storage of images, whereas the
Atari hardware can only handle 32 kB for the display of one
screen (either monochrome or color).
The basic element of an image is a pixel (picture element). To be
observable on a display, a pixel must be attributed a grey value
or color information.
As an example, consider the Atari ST. In the monochrome (high
resolution) mode the Atari ST supports the display of 2 grey
values only. Each pixel (the Atari screen has 640*400 = 256,000
of them!) has a number ('0' or '1') associated with it that
determines whether it is to be black or white: there is nothing
in between. We will call this a 'binary display'.
Note that terms like 'black', 'white' and 'grey value' in the
context of monochrome displays actually refer to the color
emitted by the phosphor used in the Cathode Ray Tube (CRT) of the
display device (amber, green, white, etc.).
Color images generally consist of three sub-images, displayed
simultaneously. The sub-images represent the decomposition of the
color image into three monochrome images (red, green and blue,
respectively). The (visual) addition of the sub-images restores
the original colors for the observer. Sub-images may be binary,
but usually they have more than two grey values. An AIM color
image, for example, has 256 possible grey values for each
sub-image.
In the color modes (medium and low resolution), the ST is capable
of displaying 8 grey values for each of the three primary colors.
Consequently, there are 8*8*8 = 512 different combinations, each
combination representing another color. Such a group of available
colors is called a palette. The palette represents all possible
states that an individual pixel may have.
Each combination of red, green and blue intensities is perceived
with a particular brightness, the luminance. Luminance
information can be used for the display of color images on a
monochrome monitor; AIM allocates an extra sub-image for it.
If a pixel has equal intensities for red, green and blue, it will
display the color 'grey'. Thus, in the palette of the med-res and
lo-res color modes there are 8 levels of grey, extending from
black (R=G=B=min=0) to white (R=G=B=max=7). Very often the
property of a color monitor to display grey values (as opposed to
just black or white) is more important to the user than its
capability of displaying colors!
As we have seen above, in binary images each pixel can have only
2 grey values. We say that each pixel has 1 'bit' (binary digit)
of information associated with it. In color images the
information per pixel is higher; AIM uses 32 bits of color and
grey value information. We will return to this shortly.
Pixels and neighbors.
An AIM image can be visualized as a 'checkerboard' with 256*256
or 128*128 square fields (for monochrome and color images,
respectively). Each field corresponds to a pixel carrying 8 bits
(= 1 Byte) of grey value information or 32 bits (= 3+1 Bytes) of
color and luminance information.
Pixels, being neatly arranged in a square raster of rows and
columns, have neighbors. The number of neighbors of a particular
pixel is referred to as its connectivity. On a checkerboard
neighbors can be defined as having either just sides, or sides as
well as corners, in common with the reference pixel. Depending on
these definitions the connectivity of pixels on a square raster
is 4 or 8. Pixels sometimes are arranged hexagonally. In such a
'honeycomb configuration' the connectivity is 6.
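The neighbor definitions above can be expressed as coordinate offsets. The following Python fragment is illustrative only (AIM itself is written in C and its internals are not shown here); the function name and the 256*256 raster size are our own assumptions:

```python
# Neighbor offsets on a square raster.
# 4-connectivity: pixels sharing a side; 8-connectivity adds the
# four diagonal (corner) neighbors as well.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(row, col, offsets, size=256):
    """Yield the in-raster neighbors of (row, col) in a size*size image."""
    for dr, dc in offsets:
        r, c = row + dr, col + dc
        if 0 <= r < size and 0 <= c < size:
            yield (r, c)
```

An interior pixel has 4 or 8 neighbors depending on the definition chosen; pixels on the image border have fewer.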
Image processing routines can be divided into three groups:
point, local and global operations.
Point operations use the information of one single pixel in the
input image to calculate the result for a corresponding pixel
(not necessarily the same) in the output image. An example is the
INVERT operation, where the intensity bit for every pixel in the
(binary) image is reversed.
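As an illustration (in Python, not AIM's actual C code), a point operation such as INVERT on a binary image could be sketched like this:

```python
# A point operation: each output pixel depends only on the single
# corresponding input pixel. Here the intensity bit of every pixel
# in a binary image (lists of 0/1 values) is reversed.
def invert(image):
    return [[1 - pixel for pixel in row] for row in image]
```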
In a local operation the result for one single pixel of the
output image is determined by the original pixel and its
neighborhood. An example of such an operation is Lmin ('Local
minimum'), where the output pixel gets the minimum grey value of
its neighborhood (including itself) in the input image. This
operation extends dark parts in the image. The neighborhood of
interest can often be specified (e.g. 1*3, 3*3, 5*5, etc.). In
AIM the neighborhood size must be odd.
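A minimal sketch of such a local minimum filter, in Python rather than AIM's C code, might look as follows (the border handling used here, shrinking the neighborhood at the image edge, is our own assumption):

```python
# Local minimum filter: each output pixel gets the smallest grey
# value found in an n*n neighborhood (n odd) around the
# corresponding input pixel, which extends dark parts of the image.
def lmin(image, n=3):
    h, w = len(image), len(image[0])
    half = n // 2
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            window = [image[rr][cc]
                      for rr in range(max(0, r - half), min(h, r + half + 1))
                      for cc in range(max(0, c - half), min(w, c + half + 1))]
            out[r][c] = min(window)
    return out
```

A single dark pixel in a light area grows into an n*n dark block: exactly the 'expanding dark parts' behavior described above.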
In global operations the result for one single output pixel is
derived from the information of all pixels in the input image.
One example of a global operation is the Fourier transform, used
to determine periodic features in the image. Fourier transforms
are not supported in this version of AIM.
PROPERTIES OF THE ATARI ST VIDEO HARDWARE
Almost all of AIM's images will probably have to be displayed on
a video monitor (either monochrome or color). However perfect an
image may be, it will eventually have to pass the bottleneck of
the computer's video display hardware. Much of the effort of
programming AIM has been devoted to overcoming inevitable
limitations of a computer's display hardware. For a correct
interpretation of the results, we will briefly go into some
details of the hardware used by the Atari ST for the control of
displays: video display memory, video controller and monitors.
The video display memory.
The central part of the display hardware is the video display
memory. Independent of the resolution chosen (lo-, med-, hi-res),
the Atari ST allocates 32 kB (= 256,000 bits) of RAM for storage
of the information of one screen.
In the high resolution mode 640*400 = 256,000 pixels can be
displayed, leaving just one bit/pixel for grey value definition.
As a result only binary images can be displayed on a monochrome
monitor.
As explained above, a color image requires more than 1 bit of
memory per pixel. You may wonder how the ST manages to attribute
more than 1 bit to a pixel if in the hi-res mode (1 bit/pixel)
all the available 32 kB is used up already!
The answer to this question can be intuitively found from the
terms 'medium-resolution' and 'low-resolution' for the color
modes. In these modes, the spatial resolution of the hi-res mode,
640*400 = 256,000 pixels, is sacrificed in order to give the
pixels more luminance (monochrome displays on color monitors) or
color information. By reducing the resolution to 640*200 =
128,000 (med-res) or 320*200 = 64,000 pixels (lo-res), the pixels
can be attributed 2 or 4 bits for luminance or color definition.
This allows palettes of 4 or 16 colors to be defined in the med-
res or lo-res mode, respectively.
Thus, the video display memory may be thought of as representing:
1 binary image of 640*400 pixels (hi-res)
2 binary images of 640*200 pixels (med-res)
4 binary images of 320*200 pixels (lo-res).
The color of a particular pixel is determined by the (weighted)
bit-values of the 2/4 corresponding pixels in the binary images.
The video controller.
The next stop on our journey from the video display memory to the
monitor screen is the video controller. This chip takes care of
all the housekeeping, required for conversion of the bytes in the
video display memory to 'real world' voltages, compatible with
the intensity input(s) of the monitors.
Since in the hi-res mode the pixels contain just 1 bit of grey
value information, the video controller incorporates a very
simple 1 bit 'D/A converter' to generate the intensity signal for
the monochrome monitor.
We have seen that, as a consequence of the limited 32 kB of video
display memory, in the med-res and lo-res modes the pixels can
have just 2/4 bits of color information. This would allow for
palettes of 4/16 colors. Fortunately, Atari has made the ST much
more versatile, generously including three 3 bit D/A converters
in the video controller. Thus, for each primary color there are 8
grey values available, allowing a palette of 8*8*8 = 512 colors.
If the pixels contain information for the definition of 4/16
colors only, how does the video controller know what colors to
choose from the palette of 512? The answer is in the operating
system; it has facilities to define a 'sub-palette' of 4/16
colors. Each color in the sub-palette represents a particular
combination of grey values for red, green and blue. These
combinations are stored in a lookup table of 4/16 elements. The
2/4 bit color information for each pixel refers to a position in
this lookup table and the video controller converts the
RGB-combination stored there into the corresponding intensity
control signals.
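The lookup mechanism can be sketched as follows. This Python fragment is for illustration only (AIM itself is written in C); the palette values in the table are made up and not taken from the operating system.

```python
# Sketch of the color lookup mechanism (illustrative values only).
# Each pixel stores only a small index; the lookup table maps that
# index to a full R, G, B combination (each component 0..7 on the ST).
lookup_table = [
    (0, 0, 0),   # index 0: black
    (7, 0, 0),   # index 1: bright red
    (0, 7, 0),   # index 2: bright green
    (7, 7, 7),   # index 3: white
]

def pixel_to_rgb(index):
    """Convert a pixel's 2-bit palette index to an RGB combination."""
    return lookup_table[index]
```

Changing one entry of the table instantly changes the color of every pixel that refers to it, without touching the video display memory itself.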
When a color monitor is used to display a monochrome grey value
image, the operating system automatically loads the lookup table
with 4/8 (why not 16?) grey combinations for the med-res or
lo-res modes.
The monitors.
Basically, a video monitor consists of a CRT with some electronic
circuitry to control the deflection and the intensity of the
electron beam. A monochrome monitor uses one electron beam. A
color monitor requires three beams, one for each primary color.
For intensity control the monitors have 1 or 3 inputs. The
voltage on these inputs determines the intensity of the light
spot on the screen.
Note that in principle both monochrome and color monitors are
capable of displaying grey values: there is a continuous range
over which the intensities can be controlled. However, as we have
seen, the other display hardware reduces the number of possible
grey values and/or colors considerably.
The deflection of the electron beam(s) is controlled by the
monitor itself; for synchronization it needs a triggering signal
('sync') which is produced by the video controller and simply
added to the intensity signal(s).
AIM IMAGES AND THEIR DISPLAY ON THE ATARI ST MONITORS
The AIM workspace allows simultaneous storage of 4 images: A, B,
C and D. Each image (monochrome or color) occupies 64 kB. A
monochrome image (stored on disk with the extension .IM) consists
of 256*256 pixels, having 8 bits of grey value information (i.e.
256 grey values possible). Color images (extension .COL) consist
of 128*128 pixels, having 24+8=32 bits of color and luminance
information. For the display of color images AIM supports the
ST's lo-res mode only (320*200 pixels with a palette of 16
colors).
You may have wondered why AIM uses 4 Bytes of information for
pixels of .COL images. Wouldn't a color image be perfectly
defined by just using 1 Byte for each color sub-image? The answer
is: yes, it would. For technical reasons, however, we have chosen
to add an extra byte for luminance definition. This format
allows, for example, the monochrome display of .COL images.
We will discuss the display of AIM images in 4 different
combinations:
Monochrome (.IM) images - monochrome display (hi-res)
Color (.COL) images - monochrome display (hi-res)
Monochrome (.IM) images - color display (lo-res)
Color (.COL) images - color display (lo-res)
Monochrome (.IM) images - monochrome display (hi-res).
As an extension to our checkerboard model we might introduce a
stack of 8 checkers placed on each field, representing the 8 grey
value bits of the corresponding pixel. A white checker represents
a '1', a black one a '0'. Depending on its position in the stack
the weight of a white checker (i.e. the contribution to the grey
value) is 128, 64, 32, 16, 8, 4, 2 or 1. Thus, if the lower three
and the upper two checkers are white, the corresponding grey
value is (128+64+32+0+0+0+2+1=) 227.
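The checker arithmetic above is just a binary-to-decimal conversion; it can be sketched in a few lines of Python (illustrative only, with the bits listed from most to least significant):

```python
# The checker-stack arithmetic from the text: eight bits, with weights
# 128, 64, 32, 16, 8, 4, 2 and 1, combine into one grey value.
def grey_value(bits):
    """bits: list of 8 values (0 or 1), most significant weight first."""
    weights = [128, 64, 32, 16, 8, 4, 2, 1]
    return sum(w * b for w, b in zip(weights, bits))
```

With the lower three and upper two checkers white, `grey_value([1, 1, 1, 0, 0, 0, 1, 1])` indeed yields 227, as in the text.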
In our model we have 8 horizontal planes of 256*256 checkers,
each plane contributing its own weight to the total image. Note
that each plane itself actually corresponds to a binary image,
hence the name: bitplanes. The upper plane (b1) is the least
significant bitplane, the lower plane (b8) is the most
significant one. In AIM the 8 bitplanes of a grey value image can
be processed as 8 separate binary images. Bitplanes of one image
can be copied (command: BCOPY) to one or more arbitrarily chosen
bitplane(s) of another. Using this facility, a grey value image
can be used as (temporary) storage for 8 independent binary
images.
As determined by the ST hardware, in the hi-res mode only binary
images can be displayed. AIM circumvents this limitation by using
a trick called sigma-delta technique: the 256 grey values
available in the image are represented as well as possible by dot
densities in the display. This yields a 256*256*1 bit display
with a reasonable suggestion of grey values to the human eye.
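The exact sigma-delta implementation used by AIM is not documented in this tutorial; the following Python fragment is a minimal one-dimensional sketch of the general idea. Grey values (0..255) are accumulated along a scanline, and a white dot is emitted whenever a full pixel's worth of intensity (255) has built up.

```python
# One-dimensional sigma-delta sketch (illustrative, not AIM's code).
# Grey values accumulate; each time the accumulator reaches 255 a
# white dot is emitted and 255 is subtracted, so the local dot density
# approximates the local grey value.
def sigma_delta_row(greys):
    out, acc = [], 0
    for g in greys:
        acc += g
        if acc >= 255:
            out.append(1)   # white dot
            acc -= 255
        else:
            out.append(0)   # black
    return out
```

A uniform mid-grey scanline of value 128 comes out as alternating black and white dots, i.e. roughly half the pixels are set, which is exactly the suggestion of 50% grey to the eye.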
A special situation occurs if we want to process monochrome
pictures from the drawing package DEGAS. Monochrome DEGAS
pictures (extension .PI3), being binary 640*400 pixel images,
cannot be processed directly by AIM. When read from a disk these
images are converted to a 320*200 pixel image with 5 levels of
grey information. The 640*400 pixels are grouped in 320*200 non-
overlapping 2*2 fields. Each pixel in the final 320*200 pixel
image is attributed the average grey value of the corresponding
2*2 field:
0 pixels white -> grey value: 0
1 pixels white -> grey value: 63
2 pixels white -> grey value: 127
3 pixels white -> grey value: 191
4 pixels white -> grey value: 255
Subsequently, the converted DEGAS images get the same sigma-delta
treatment as the regular .IM images for the display of grey
values. Note that in this process information is lost; the
resulting image has less than 320*200*3 = 192,000 bits of
information (originally: 256,000 bits). Consequently, even though
AIM can re-convert (losing information again!) these images back
to DEGAS format, the original pictures can never be recovered.
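The 2*2 reduction described above can be sketched as follows (illustrative Python, not AIM's code; a binary picture is represented as a list of rows of 0/1 values):

```python
# Reduce a binary picture by averaging non-overlapping 2*2 fields:
# each field becomes one pixel whose grey value depends on how many
# of the 4 original pixels were white (0, 63, 127, 191 or 255).
def reduce_2x2(binary):
    h, w = len(binary), len(binary[0])
    out = []
    for r in range(0, h, 2):
        row = []
        for c in range(0, w, 2):
            white = (binary[r][c] + binary[r][c + 1]
                     + binary[r + 1][c] + binary[r + 1][c + 1])
            row.append(white * 255 // 4)
        out.append(row)
    return out
```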
Color (.COL) images - monochrome display (hi-res).
For the display of a .COL image on a monochrome monitor, only the
128*128 pixel luminance sub-image is used. The sigma-delta
technique again converts this luminance information to dot
densities.
Monochrome (.IM) images - color display (lo-res).
Monochrome (.IM) images have 8 bits (256 levels) of grey value
information. When an .IM image is read from disk in the lo-res
mode, AIM loads the 16-color palette with the maximum of 8 levels
of grey and applies the sigma-delta technique to interpolate
between these 8 levels.
Color (.COL) images - color display (lo-res).
For the display of .COL images in the lo-res mode AIM first
determines an optimum palette of 16 colors from the palette of
256*256*256 colors available in the original image. The 8 bits of
luminance information are not used.
The palette for a particular screen is derived from the image in
the active window only; this may not be the optimum palette for
other images displayed simultaneously.
In this mode color pictures from the DEGAS and Neochrome drawing
packages (extensions .PI1 and .NEO, respectively) can also be
read by AIM. Again, size reduction (to 160*100 pixels), grey value
attribution and the sigma-delta technique are applied for the
processing and display of these images.
IMAGE PROCESSING CAPABILITIES OF AIM
The image processing software package AIM has been written in the
C language, using the Mark Williams C-compiler. It makes
extensive use of the GEM window structure for the user interface
and the display of images. AIM allows processing and display of
monochrome, binary and grey value images as well as of color
images. It has an interface (via the Atari ST's DMA bus) to
monochrome and color frame grabbers, developed in the Pattern
Recognition Group.
The user interface.
Image processing operations can be chosen from drop-down menus,
using dialog boxes, as well as by typing the appropriate command
in the command box. In either case the commands are transferred
as strings by the Window Management System to the Command
Interpreter. Many of the AIM commands cannot be invoked from
drop-down menus. They are designated by an asterisk (*).
The command interpreter starts the image processing routine. It
is not necessary to type the complete command name; any number of
characters that doesn't give ambiguities will do.
Sequences of commands may be combined in a command file or macro.
Command files can be nested to 16 levels. They are automatically
created by activating the Logging feature; any operation you
perform after 'Logging' is checked, will be logged. Macros can be
saved to disk for re-use. The extension for the filename is .AIM.
A macro is executed by typing the name (without extension),
preceded by the character '@'. Clicking the menu-entry Macro and
the name in the dialog box will do the same. Pausing the
execution is possible by hitting the Esc-key. Proceed with the
macro by hitting Return or stop by clicking in the appropriate
box.
The commands in the Command Window can be edited simply. Delete
and backspace work as in most word processors. Control-X deletes
a whole line.
All characters typed after a star (*) will be ignored by the
command interpreter and thus can be used for comments.
Typing a question mark (?) will display a list of available
operations. The syntax and a short explanation for the
commands can be found in Help-files. Choose HELP!!! in the
Utilities menu. Note, however, that this version (2.5) of AIM has
been extensively revised. Many operations have been added and
some of the existing operations allow more parameters to be
specified than those given in the current Help-files and dialog
boxes. Fortunately, all operations are upward compatible. The
AIMANUAL.DOC in the COMPRESS folder contains an updated
description of all the available operations.
EXAMPLES AND DEMOS
The examples on the AIM disk are self-explanatory. The only
prerequisite is a reasonable understanding of the previous
chapters. Because disk space was limited, we could only include
some of the most relevant examples. Consequently they cover just
a few of the possible operations; most of the operations you will
have to explore on your own!
The commented command files EXAM1.AIM -> EXAM5.AIM have been
included so that you can check your manually entered exercises.
Besides the prepared examples three extra demos (macros) have
been included on the disk: DEMOCER.AIM, DEMOHFE.AIM and
DEMOSCH.AIM. They contain some operations and strategies that may
be of interest to you after having completed the examples.
Finally two exercises are added to (self-)test your abilities. If
things go totally wrong, don't feel embarrassed about peeking
into the macros PRAK1.AIM and PRAK2.AIM; they contain possible
solutions for these exercises. The exercises are compressed files
and can be found in the folder COMPRESS.
In case you missed a point: in a grey value image 'black'
corresponds with a value 0, 'white' with a value 255. In a binary
image (or a bitplane of a grey value image) these values are 0
and 1, respectively.
All examples and exercises will work in both modes, lo-res color
and hi-res monochrome. For binary operations a monochrome monitor
is recommended. Some demos are more impressive in color, though!
Operations used: Histogram, Equalize
First we will retrieve the image MOON.IM. Click the entry 'Read
Image' under the 'File' menu. The defaults in the selector box
that pops up ('Window: A' and 'STANDARD') are acceptable, so
click 'OK' (or hit the Return key). Those of you who prefer a
command line interpreter might as well type in 'read A' instead
of using the mouse.
Next a GEM 'ITEM SELECTOR' appears. Choose 'MOON.IM' and after a
few seconds a picture appears in Window A, showing the foot of a
moon lander surrounded by moon dust. You will agree that,
normally, you would throw such a picture in the trashcan and
probably declare the mission to the moon a flop. AIM can help you
deal with such frustrations!
Just to get a feel of the power accumulated on that tiny 3.5"
disk, try the operation 'Equalize' under the menu 'GreyOps'.
Choose A as the input image and B as the output. Looks better,
doesn't it? Remember, the information was already in the image;
even without AIM the mission to the moon wasn't in vain! But the
application of AIM certainly adds to the success.
To understand what has happened to the original image, try the
entry 'Histogram' under 'Utilities'. This will show a graph of
the frequency of occurrence of a particular grey value (or
rather: a small range of grey values) in the image, plotted
against the grey value. Frequencies are plotted vertically, grey
values (0 -> 255) horizontally. The histogram indicates that the
grey values in the original image were concentrated in a
relatively small range (centered around grey value 50).
From the histogram we can conclude that the grey scale from 0 to
255 has not been used efficiently; nearly all pixels have a grey
value between, say, 25 and 75. The result is that, in a way, we
are looking at the printing on a toy balloon; only if we inflate
the balloon some of the smaller details will show. If somehow we
could stretch the original grey value range, so that 25 would
shift to 0 and 75 to 255, the same effect might be imposed on our
moon image. Throwing away the ranges 0 -> 25 and 75 -> 255
doesn't do much harm, because little information will be lost.
Equalization is the air pump that inflates a grey value range.
Unlike an air pump (and the suggestion in the previous
paragraph!) it does so intelligently. That is, the stretching
effect on a particular range of grey values depends on the height
of the histogram for that range. Ranges with high values in the
histogram will be expanded at the cost of the compression of less
prominent ranges. Note that there may be multiple peaks and
valleys in the histogram. To see what we mean, determine the
histogram of image B and compare it with the original one. The
frequencies (vertical lines) are divided more evenly over the
available range of grey values.
Ranges with high frequencies of occurrence in the original
histogram have been stretched and those with low frequencies
compressed. If you are not completely satisfied, perform the same
operations on the image TRUI.IM, which has multiple peaks in the
histogram.
Equalization is a nice example of image processing: important
features (in this case grey value ranges) are emphasized at the
cost of less important features. Yet, in this particular
operation no information is lost; a 'de-emphasis' procedure (not
available in AIM) could restore the original image.
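For the curious, the principle of equalization can be sketched in a few lines of Python. This is the generic textbook scheme, operating on a flat list of grey values, and not AIM's implementation:

```python
# Histogram equalization sketch: grey levels are remapped via the
# cumulative histogram, so heavily used grey value ranges are
# stretched and sparsely used ranges are compressed.
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0              # cumulative histogram
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    return [(cdf[p] * (levels - 1)) // n for p in pixels]
```

Grey values that occur often end up far apart in the output range, which is exactly the 'intelligent air pump' effect described above.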
Hint: if you want to get rid of the histogram in a particular
display, use 'Gdisplay' in the Utilities menu.
Operations: Gradx, Grady, Enhance
Read the image TRUI.IM in (the default) Window A. This image has
become a kind of reference for many groups involved in image
processing.
defaults proposed by AIM. Worry about the Multiply and Add values
later, or better still, experiment with them.
The Gradx ('gradient in x-direction') operation determines the
rate at which grey values change in the original image and
attributes corresponding grey values to the pixels of the output
image. An output pixel gets a grey value according to the
formula:
grey-out = (gradient-in) * Multiply/1024 + Add.
This operation shows vertical edges in the image. An edge
corresponds to a step in grey value. Note the effect on edges in
the girl's hat.
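A gradient operation of this kind can be sketched in Python. The symmetric left/right difference used here, and the default Multiply and Add values, are our own assumptions for illustration; AIM's actual kernel and defaults are not documented in this tutorial:

```python
# Horizontal gradient sketch following the formula
#   grey-out = gradient * Multiply/1024 + Add
# Flat areas come out at the Add value; edges stand out above or
# below it. Border columns are left at the Add value.
def gradx(image, multiply=1024, add=128):
    h, w = len(image), len(image[0])
    out = [[add] * w for _ in range(h)]
    for r in range(h):
        for c in range(1, w - 1):
            gradient = (image[r][c + 1] - image[r][c - 1]) // 2
            g = gradient * multiply // 1024 + add
            out[r][c] = max(0, min(255, g))   # clip to the grey scale
    return out
```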
Now perform Grady. This time send the output to Window C. Note
the result for horizontal edges (mouth, eyebrows, etc.).
A related operation, which is generally used to sharpen up
pictures, is the Enhance operation (in menu GreyOps). It
determines the second derivative (in both directions) of the
grey-values in the input image. Like Gradx/y it emphasizes high
frequencies in the image. High frequencies occur at sharp edges.
Try the Enhance filter on the original TRUI.IM. and enjoy the
improvement. But what did we do to her eyes? Seems we created her
Operation: Median filter
Read image BNOISE.IM. This image has intentionally been degraded
by 'shot noise'. This type of noise occurs often in the
transmission of signals. Some of the pixels differ very much in
grey value from the surrounding pixels. In this case they are
white. We want to use filtering to remove the noise.
Usually it is difficult to predict the effects of filtering.
Various parameters must be specified, requiring much intuition.
Try, for example, the entry 'Filter' in the GreyOps menu!
A good filter for the problem of shot noise is the so-called
median filter. In this operation AIM determines the grey values
in 3*3 neighborhoods for each pixel in the input image. These
values are sorted in tables with 9 entries. The 5th entry in each
table (the 'median') becomes the grey value of the corresponding
output pixel. The grey value of a shot noise speck is likely to
be in the extremes of the tables in which it appears, and thus it
will be removed.
Perform the operation 'Median' in the GreyOps menu. Only a few
stubborn noise specks will survive. They may be removed by a
repeated application of this filter.
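The sorting recipe described above can be sketched in a few lines of Python (illustrative only; border pixels here simply use a smaller neighbourhood, which may differ from what AIM does):

```python
# Sketch of a 3*3 median filter in pure Python.

def median_filter(img):
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # collect the neighbourhood that falls inside the image
            neigh = [img[j][i]
                     for j in range(max(y - 1, 0), min(y + 2, h))
                     for i in range(max(x - 1, 0), min(x + 2, w))]
            neigh.sort()
            row.append(neigh[len(neigh) // 2])   # the median entry
        out.append(row)
    return out
```

A single bright speck in a dark area ends up at the extreme of its sorted table and is replaced by the median, 0.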
Operations: Copy, Lmin, Lmax, (*)mindev, Gdisplay
Once more we call on TRUI.IM; read it from disk. We will need a
second copy of this image in Window B. There are two options:
a. Read the image again from disk and have AIM put it in Window B
b. Use 'Copy' in the GreyOps menu to copy Window A to Window B.
Note that there is a 'Copy' entry in the BinOps menu too. That
binary operation copies a single bitplane, as opposed to a grey
value copy, which copies all 8 bitplanes of an image at once.
If all went well, you now have the fabulous Trui twins on screen.
The first operation to be performed is 'Lmin' in the GreyOps
menu. This operation looks for the lowest grey value in an n*n
neighborhood (n to be specified by the user) of the input pixel
and attributes it to the corresponding output pixel. This filter
tends to expand darker areas in the image. Try it
with n=9 and put the result in Window C. Experiment a little with
different values for n. Observe the effects on nostrils and eyes!
In any case end with a 9*9 Lmin-ned TRUI.IM in Window C.
Next apply the 'Lmax' filter on Window A. Choose n=9 and put the
result in Window D. This filter expands lighter areas in the
image. We hope that by now you have acquired enough intuition to be
able to predict the effect: gone are those nostrils and gone is
that mouth in the 'lighter shade of pale' that surrounded them.
But now the real image processor arises. We are going to use an
operation that uses three input images: (*)mindev C,D,A. This
operation compares corresponding pixels (i.e. pixels with the
same coordinates) in the original image (A) and the two other
images (C and D). The output pixel gets the grey value of the
corresponding pixel in C or D, depending on which one of the two
is closest to the grey value in A. Big deal, but what is the use?
Let's try it first. The (*)mindev ('minimal deviation')
operation cannot be invoked from the menu, so we have to type it
in the Command Window: type mindev C,D,A <Return>. The result will
be shown in Window A. Compare it with the original image, which
is still lurking in Window B. Apparently the edges in the image
(cheeks, hair) have been sharpened. The net effect of Lmin, Lmax
and mindev is that near edges the grey value snaps to one of the
two local extremes. In other words, grey values resist gradual
change; transitions occur abruptly as a function of position.
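The Lmin/Lmax/mindev recipe can be sketched on a single row of grey values (Python, illustrative only; function names are ours, and n=3 keeps the numbers small):

```python
# Lmin, Lmax and mindev on one row of grey values. mindev picks, per
# pixel, whichever of two candidate values lies closest to the original.

def local_extreme(row, fn, n=3):
    """Apply min or max over an n-wide neighbourhood of each pixel."""
    r = n // 2
    return [fn(row[max(i - r, 0):i + r + 1]) for i in range(len(row))]

def mindev(c, d, a):
    """Per pixel, take the value from c or d that deviates least from a."""
    return [ci if abs(ci - ai) <= abs(di - ai) else di
            for ci, di, ai in zip(c, d, a)]

row  = [10, 10, 12, 120, 200, 200]    # a soft (ramped) edge
lo   = local_extreme(row, min)        # Lmin: dark areas expand
hi   = local_extreme(row, max)        # Lmax: light areas expand
edge = mindev(lo, hi, row)            # snaps each pixel to min or max
```

The soft ramp in the middle of `row` snaps to a hard step: each pixel jumps to whichever extreme it was already closest to.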
Operations: (*)Clear, Threshold, Invert, Propagation, Bdisplay,
Reset, Exor and (*) Measure.
Read the image CERMET.IM from disk.
Our purpose with this example is to do some measurements on the
objects in this image. The '(*)Measure' operation requires the
objects of interest to be white in a black background. In the
menu 'BinOps' we have some useful operations at our disposal to
achieve that goal.
In order to be able to apply binary operations ('BinOps'), we
first will make the grey value image in Window A binary. The
operation 'Threshold' in the GreyOps menu makes pixels with a
grey value below the threshold black and pixels with a grey value
above it white. When the operation is invoked, it proposes a
threshold equal to the average grey value in the image; this is
not necessarily the best guess, but it usually yields satisfactory
results.
Stop! The Threshold operation will ask for the bitplane where the
binary image is to be stored (it defaults to b1 in Window B).
However, as our previous exercises will have left some garbage in
the other bitplanes of Window B, we first have to clear those.
The operation '(*)Clear' is provided to clear all bitplanes of a
window at once (clear = grey value 0 = black!). Type: clear B <Return>.
Now we are ready for the Threshold operation; click on it in the
GreyOps menu and accept the defaults.
As we need white objects in a black background, the picture in
Window B must be inverted. Click on 'Invert' in the BinOps menu
(accept the defaults).
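For illustration, the two steps just performed can be sketched in Python (our own code; AIM of course works on real 256-level images and bitplanes):

```python
# Threshold at the average grey value (AIM's proposed default) followed
# by Invert. In the binary result 1 = white, 0 = black.

def threshold(img):
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat)            # average grey value
    return [[1 if p >= t else 0 for p in row] for row in img]

def invert(binary):
    return [[1 - p for p in row] for row in binary]
```

Dark objects in a light background thus become white objects in a black background, which is what (*)Measure will expect.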
Note that some of the objects near the frame of the window are
only partially displayed. We don't want the '(*)Measure'
operation to take these incomplete objects into account, but how
can we get rid of them? How can we distinguish them from
complete objects? The answer is: use the 'Propagation' operation
(in CelOps) .
The 'Propagation' operation allows us to isolate objects. White
pixels (called 'seeds') will 'grow' (expand) under the condition
that in a given reference bitplane (the 'mask') the corresponding
pixels are white too. We can visualize this operation as taking
place in a field with seeds (= white pixels that we want to grow
to shapes present in the mask). The reference bitplane (the
'mask') is the culture medium; it has fertile and infertile
regions. If a region is fertile (i.e. it has white pixels that
coincide with the seed pixels), the seed can grow. In one
'iteration' (propagation step) a pixel grows in accordance with
its connectivity: one white pixel is added in all connecting
directions. In AIM the seed bitplane is called CLP (Cellular Logic
Plane).
In the next iteration the seed bitplane (CLP) that resulted from
the first iteration is compared once more with the mask bitplane.
Coinciding white pixels in CLP and mask will, once more, grow.
This will continue until all seeds have grown to the edges of the
fields in the mask; we have created an exact copy of all the
fields that in the first iteration coincided with at least one
seed. The trick of isolating particular fields in the mask is to
create a CLP with seeds that coincide with those fields.
Let's return to our problem of isolating the incomplete white
objects at the borders of bitplane b1 in Window B. It would be
nice if we had a bitplane available (to be used as the CLP) with
white pixels along the frame only. The Propagation operation
would make those border-pixels grow to the exact shapes of the
intersected objects in the mask (in our case: b1). The result
would be: two images having the unwanted objects in common.
Subsequent application of the logical operation 'Exor' (in the
BinOps menu) on these two images will finally remove the unwanted
objects. Exor is the inequality operator: it compares
corresponding pixels in both (binary!) images and outputs a white
pixel if (and only if) they are different.
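The whole recipe (frame seeds, propagation through the mask, then Exor) can be sketched in Python. This is an illustrative reimplementation with 4-connectivity, not AIM's actual code; note that we clip the frame seeds to the mask before growing, so the Exor leaves no frame residue.

```python
# Remove objects that touch the image border: seed the frame, propagate
# the seeds through the mask until nothing grows any more, then Exor.

def propagate(seed, mask):
    """Grow white seed pixels inside white mask regions until stable."""
    h, w = len(mask), len(mask[0])
    clp = [row[:] for row in seed]
    grown = True
    while grown:
        grown = False
        for y in range(h):
            for x in range(w):
                if clp[y][x] or not mask[y][x]:
                    continue
                # grow if any 4-connected neighbour is already white
                if any(clp[j][i]
                       for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                       if 0 <= j < h and 0 <= i < w):
                    clp[y][x] = 1
                    grown = True
    return clp

def remove_border_objects(mask):
    h, w = len(mask), len(mask[0])
    # CLP with white pixels along the frame only ('Edge value' = 1),
    # clipped to the mask so only border-touching objects get seeds
    seed = [[1 if (y in (0, h - 1) or x in (0, w - 1)) and mask[y][x] else 0
             for x in range(w)] for y in range(h)]
    grown = propagate(seed, mask)
    # Exor: white where (and only where) the two images differ
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(mask, grown)]
```

The grown seeds and the mask have exactly the border-touching objects in common, so the Exor keeps only the complete, interior objects.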
Now that we hopefully understand it we cannot wait to put our
knowledge to the test. The only image we need is bitplane b1 in
Window B. Click on Propagation. We will have to change almost all
of the defaults! Our mask bitplane is b1 instead of the proposed
b2. Furthermore, we will use b2 as our CLP instead of b1. The
connectivity is not important, but change it to 8. The necessary
white edge for our CLP can be chosen with the entry 'Edge value'.
Choose 1 (for white). Make the number of iterations 100; that is
too many, but AIM will stop automatically when all growth has
stopped.
Check all entries in the Dialog Box: if something is wrong there
is a risk that you have to start all over again! Your ST won't
complain, but we don't want to frustrate prospective image
processors at such an early stage.
Hit Return. If all is well, you will see an image in bitplane b2
of Window B that has the intersected objects only. Wrong? Don't
worry, just keep trying. Don't forget to clear bitplane b2 prior
to every Propagation operation.
Next invoke operation Exor (in BinOps). Use b1 and b2 in Window B
as inputs and b3 as output.
At last our image is ready for the (*)Measure operation. This
operation assumes that the image is in b1 of Window B, so we must
copy b3 to b1 first. Click 'Copy' in BinOps and choose the
appropriate bitplanes (b3 as input, b1 as output).
(*)Measure performs a number of measurements on the objects and
displays the results in the Command Window. It assigns numbers to
the objects (left -> right, up -> down) and labels each object
with a grey value according to this number. This can be observed
in Window D.
Type measure (mea will do!) <Return>. Note that the Command
Window can be sized to bring all measured data on screen.
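The first thing an operation like (*)Measure must do is label the objects and then measure each one. A minimal Python sketch using an 8-connected flood fill (area is the only measurement here; the scan order gives the left-to-right, top-to-down numbering):

```python
# Label each white object with its own number and measure its area.

def label_and_measure(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    areas = {}
    n = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                n += 1                       # next object number
                labels[y][x] = n
                stack = [(y, x)]
                area = 0
                while stack:                 # flood-fill this object
                    j, i = stack.pop()
                    area += 1
                    for dj in (-1, 0, 1):
                        for di in (-1, 0, 1):
                            nj, ni = j + dj, i + di
                            if (0 <= nj < h and 0 <= ni < w and
                                    img[nj][ni] and not labels[nj][ni]):
                                labels[nj][ni] = n
                                stack.append((nj, ni))
                areas[n] = area
    return labels, areas
```

The label image corresponds to what you observe in Window D: every object painted in a grey value equal to its number.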
Study the image PRAK1.IM. Your assignment is to separate the nuts
from the bolts. In other words: use AIM to generate two images,
one containing the nuts (including the one that rests on its
side!) and the other one the bolts. Count the objects afterwards.
The image PRAK2.IM shows a handful of electronic components. Try
to separate them. Use your imagination and surprise husbands, friends