12.5 ImageFilter

Image filters provide another way to modify images. An ImageFilter is used in conjunction with a FilteredImageSource object. The ImageFilter, which implements ImageConsumer (and Cloneable), receives data from an ImageProducer and modifies it; the FilteredImageSource, which implements ImageProducer, sends the modified data to the new consumer. As Figure 12.1 shows, an image filter sits between the original ImageProducer and the ultimate ImageConsumer.

The ImageFilter class implements a "null" filter that does nothing to the image. To modify an image, you must use a subclass of ImageFilter, by either writing one yourself or using a subclass provided with AWT, like the CropImageFilter. Another ImageFilter subclass provided with AWT is the RGBImageFilter; it is useful for filtering an image on the basis of a pixel's color. Unlike the CropImageFilter, RGBImageFilter is an abstract class, so you need to create your own subclass to use it. Java 1.1 introduces two more image filters, AreaAveragingScaleFilter and ReplicateScaleFilter. Any other filters must be created by subclassing ImageFilter and providing the methods needed to modify the image.

ImageFilters tend to work on a pixel-by-pixel basis, so large Image objects can take a considerable amount of time to filter, depending on the complexity of the filtering algorithm. In the simplest case, filters generate new pixels based upon the color value and location of the original pixel. Such filters can start delivering data before they have loaded the entire image. More complex filters may use internal buffers to store an intermediate copy of the image so the filter can use adjacent pixel values to smooth or blend pixels together. These filters may need to load the entire image before they can deliver any data to the ultimate consumer.

To use an ImageFilter, you pass it to the FilteredImageSource constructor, which serves as an ImageProducer to pass the new pixels to their consumer. The following code runs the image logo.jpg through an image filter, SomeImageFilter, to produce a new image. The constructor for SomeImageFilter is called within the constructor for FilteredImageSource, which in turn is the only argument to createImage().

Image image = getImage (new URL (
     "http://www.ora.com/images/logo.jpg"));
Image newOne = createImage (new FilteredImageSource (image.getSource(),
                               new SomeImageFilter()));

ImageFilter Methods

Variables

protected ImageConsumer consumer;

The actual ImageConsumer for the image. It is initialized automatically for you by the getFilterInstance() method.

Constructor

public ImageFilter ()

The only constructor for ImageFilter is the default one, which takes no arguments. Subclasses can provide their own constructors if they need additional information.

ImageConsumer interface methods

public void setDimensions (int width, int height)

The setDimensions() method of ImageFilter is called when the width and height of the original image are known. It calls consumer.setDimensions() to tell the next consumer the dimensions of the filtered image. If you subclass ImageFilter and your filter changes the image's dimensions, you should override this method to compute and report the new dimensions.
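
For example, a filter that doubled an image's size might report the new dimensions along the following lines. This is only a sketch (the class name and the scale factor are made up for illustration), and a real filter would also have to override setPixels() to produce the extra pixels.

import java.awt.image.*;

// A hypothetical sketch: a filter that reports doubled dimensions.
// DoublingFilter is not part of AWT; only setDimensions() is shown.
public class DoublingFilter extends ImageFilter {
    protected int srcWidth, srcHeight;
    public void setDimensions (int width, int height) {
        srcWidth  = width;      // remember the original size for later use
        srcHeight = height;
        // report the dimensions of the filtered image, not the original
        consumer.setDimensions (width * 2, height * 2);
    }
}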

public void setProperties (Hashtable properties)

The setProperties() method is called to provide the image filter with the property list for the original image. The image filter adds the property filters to the list and passes it along to the next consumer. The value given for the filters property is the result of the image filter's toString() method; that is, the String representation of the current filter. If filters is already set, information about this ImageFilter is appended to the end. Subclasses of ImageFilter may add other properties.
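
For example, a subclass that wants to record one of its own settings could override setProperties() along these lines; the class name and property name are purely illustrative.

import java.util.Hashtable;
import java.awt.image.ImageFilter;

// A minimal sketch: record a subclass-specific setting in the property
// list, then let ImageFilter.setProperties() append its "filters" entry
// and pass the list along to the consumer.
public class PropertyAddingFilter extends ImageFilter {
    public void setProperties (Hashtable properties) {
        properties.put ("blurRadius", "1");   // hypothetical property
        super.setProperties (properties);
    }
}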

public void setColorModel (ColorModel model)

The setColorModel() method is called to give the ImageFilter the color model used for most of the pixels in the original image. It passes this color model on to the next consumer. Subclasses may override this method if they change the color model.

public void setHints (int hints)

The setHints() method is called to give the ImageFilter hints about how the producer will deliver pixels. This method passes the same set of hints to the next consumer. Subclasses must override this method if they need to provide different hints; for example, if they are delivering pixels in a different order.

public void setPixels (int x, int y, int width, int height, ColorModel model, byte pixels[], int offset, int scansize)
public void setPixels (int x, int y, int width, int height, ColorModel model, int pixels[], int offset, int scansize)

The setPixels() method receives pixel data from the ImageProducer and passes all the information on to the ImageConsumer. (x, y) is the top left corner of the bounding rectangle for the pixels. The bounding rectangle has size width x height. The ColorModel for the new image is model. pixels is the byte or integer array of the pixel information, starting at offset (usually 0), with scan lines of size scansize (usually width).

public void imageComplete (int status)

The imageComplete() method receives the completion status from the ImageProducer and passes it along to the ImageConsumer.

If you subclass ImageFilter, you will probably override the setPixels() methods. For simple filters, you may be able to modify the pixel array and deliver the result to consumer.setPixels() immediately. For more complex filters, you will have to build a buffer containing the entire image; in this case, the call to imageComplete() will probably trigger filtering and pixel delivery.
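
As a sketch of the simple case, the hypothetical filter below rewrites each pixel and hands the result to the consumer immediately; it copies the data into a new array rather than touching the producer's buffer. Only the integer version of setPixels() is shown, and the code assumes the incoming pixels already use the default RGB color model; a robust filter would convert other models first. The comments show how offset and scansize locate a pixel within the array.

import java.awt.image.*;

// A minimal sketch (not part of AWT): invert each pixel's color channels
// and deliver the result immediately, without buffering the whole image.
// Assumes the pixels arrive in the default RGB color model.
public class InvertFilter extends ImageFilter {
    public void setPixels (int x, int y, int width, int height,
            ColorModel model, int pixels[], int offset, int scansize) {
        int inverted[] = new int [width * height];
        for (int row = 0; row < height; row++)
            for (int col = 0; col < width; col++) {
                // pixel (x+col, y+row) lives at offset + row*scansize + col
                int rgb = pixels[offset + row * scansize + col];
                inverted[row * width + col] =
                    (rgb & 0xff000000) | (~rgb & 0x00ffffff);
            }
        consumer.setPixels (x, y, width, height, model, inverted, 0, width);
    }
}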

Cloneable interface methods

public Object clone ()

The clone() method creates a clone of the ImageFilter. The getFilterInstance() function uses this method to create a copy of the ImageFilter. Cloning allows the same filter instance to be used with multiple Image objects.

Other methods

public ImageFilter getFilterInstance (ImageConsumer ic)

FilteredImageSource calls getFilterInstance() to register ic as the ImageConsumer for an instance of this filter; to do so, it sets the instance variable consumer. In effect, this method inserts the ImageFilter between the image's producer and the consumer. You have to override this method only if there are special requirements for the insertion process. This default implementation just calls clone().

public void resendTopDownLeftRight (ImageProducer ip)

The resendTopDownLeftRight() method tells the ImageProducer ip to try to resend the image data in the top-down, left-to-right order. If you override this method and your ImageFilter has saved the image data internally, you may want your ImageFilter to resend the data itself, rather than asking the ImageProducer. Otherwise, your subclass may ignore the request or pass it along to the ImageProducer ip.

Subclassing ImageFilter: A blurring filter

When you subclass ImageFilter, there are very few restrictions on what you can do. We will create a few subclasses that show some of the possibilities. This ImageFilter generates a new pixel by averaging the pixels around it. The result is a blurred version of the original. To implement this filter, we have to save all the pixel data into a buffer; we can't start delivering pixels until the entire image is in hand. Therefore, we override setPixels() to build the buffer; we override imageComplete() to produce the new pixels and deliver them.

Before looking at the code, here are a few hints about how the filter works; it uses a few tricks that may be helpful in other situations. We need to provide two versions of setPixels(): one for integer arrays, and the other for byte arrays. To avoid duplicating code, both versions call a single method, setThePixels(), which takes an Object as an argument, instead of a pixel array; thus it can be called with either kind of pixel array. Within the method, we check whether the pixels argument is an instance of byte[] or int[]. The body of this method uses another trick: when it reads the byte[] version of the pixel array, it ANDs the value with 0xff. This prevents the byte value, which is signed, from being converted to a negative int when used as an argument to cm.getRGB().

The logic inside of imageComplete() gets a bit hairy. This method does the actual filtering, after all the data has arrived. Its job is basically simple: compute an average value of the pixel and the eight pixels surrounding it (i.e., a 3x3 rectangle with the current pixel in the center). The problem lies in taking care of the edge conditions. We don't always want to average nine pixels; in fact, we may want to average as few as four. The if statements figure out which surrounding pixels should be included in the average. The pixels we care about are placed in sumArray[], which has nine elements. We keep track of the number of elements that have been saved in the variable sumIndex and use a helper method, avgPixels(), to compute the average. The code might be a little cleaner if we used a Vector, which automatically counts the number of elements it contains, but it would probably be much slower.

Example 12.7 shows the code for the blurring filter.

Example 12.7: Blur Filter Source

import java.awt.*;
import java.awt.image.*;
public class BlurFilter extends ImageFilter {
    private int savedWidth, savedHeight, savedPixels[];
    private static ColorModel defaultCM = ColorModel.getRGBdefault();
    public void setDimensions (int width, int height) {
        savedWidth=width;
        savedHeight=height;
        savedPixels=new int [width*height];
        consumer.setDimensions (width, height);
    }

We override setDimensions() to save the original image's height and width, which we use later.

    public void setColorModel (ColorModel model) {
    // Change color model to model you are generating
        consumer.setColorModel (defaultCM);
    }
    public void setHints (int hintflags) {
    // Set new hints, but preserve SINGLEFRAME setting
        consumer.setHints (TOPDOWNLEFTRIGHT | COMPLETESCANLINES |
                           SINGLEPASS | (hintflags & SINGLEFRAME));
    }

This filter always generates pixels in the same order, so it sends the hint flags TOPDOWNLEFTRIGHT, COMPLETESCANLINES, and SINGLEPASS to the consumer, regardless of what the image producer says. It sends the SINGLEFRAME hint only if the producer has sent it.

    private void setThePixels (int x, int y, int width, int height,
            ColorModel cm, Object pixels, int offset, int scansize) {
        int sourceOffset = offset;
        int destinationOffset = y * savedWidth + x;
        boolean bytearray = (pixels instanceof byte[]);
        for (int yy=0;yy<height;yy++) {
            for (int xx=0;xx<width;xx++)
                if (bytearray)
                    savedPixels[destinationOffset++]=
                        cm.getRGB(((byte[])pixels)[sourceOffset++]&0xff);
                else
                    savedPixels[destinationOffset++]=
                        cm.getRGB(((int[])pixels)[sourceOffset++]);
            sourceOffset += (scansize - width);
            destinationOffset += (savedWidth - width);
        }
    }

setThePixels() saves the pixel data for the image in the array savedPixels[]. Both versions of setPixels() call this method. It doesn't pass the pixels along to the image consumer, since this filter can't process the pixels until the entire image is available.

    public void setPixels (int x, int y, int width, int height,
           ColorModel cm, byte pixels[], int offset, int scansize) {
        setThePixels (x, y, width, height, cm, pixels, offset, scansize);
    }
    public void setPixels (int x, int y, int width, int height,
           ColorModel cm, int pixels[], int offset, int scansize) {
        setThePixels (x, y, width, height, cm, pixels, offset, scansize);
    }
    public void imageComplete (int status) {
        if ((status == IMAGEABORTED) || (status == IMAGEERROR)) {
            consumer.imageComplete (status);
            return;
        } else {
            int pixels[] = new int [savedWidth];
            int position, sumArray[], sumIndex;
            sumArray = new int [9]; // maxsize - vs. Vector for performance
            for (int yy=0;yy<savedHeight;yy++) {
                position=0;
                int start = yy * savedWidth;
                for (int xx=0;xx<savedWidth;xx++) {
                    sumIndex=0;
                                                                   //  xx     yy
                    sumArray[sumIndex++] = savedPixels[start+xx];  // center center
                    if (yy != (savedHeight-1))                     // center bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth];
                    if (yy != 0)                                   // center top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth];
                    if (xx != (savedWidth-1))                      // right  center
                        sumArray[sumIndex++] = savedPixels[start+xx+1];
                    if (xx != 0)                                   // left   center
                        sumArray[sumIndex++] = savedPixels[start+xx-1];
                    if ((yy != 0) && (xx != 0))                    // left   top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth-1];
                    if ((yy != (savedHeight-1)) && (xx != (savedWidth-1)))
                        //                                           right  bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth+1];
                    if ((yy != 0) && (xx != (savedWidth-1)))       // right  top
                        sumArray[sumIndex++] = savedPixels[start+xx-savedWidth+1];
                    if ((yy != (savedHeight-1)) && (xx != 0))      // left   bottom
                        sumArray[sumIndex++] = savedPixels[start+xx+savedWidth-1];
                    pixels[position++] = avgPixels(sumArray, sumIndex);
                }
                consumer.setPixels (0, yy, savedWidth, 1, defaultCM,
                                    pixels, 0, savedWidth);
            }
            consumer.imageComplete (status);
        }
    }

imageComplete() does the actual filtering after the pixels have been delivered and saved. If the producer reports that an error occurred, this method passes the error flags to the consumer and returns. If not, it builds a new array, pixels[], which contains the filtered pixels, and delivers these to the consumer.

Previously, we gave an overview of how the filtering process works. Here are some details. (xx, yy) represents the current point's x and y coordinates. The point (xx, yy) must always fall within the image; otherwise, our loops are constructed incorrectly. Therefore, we can copy (xx, yy) into the sumArray[] for averaging without any tests. For the point's eight neighbors, we check whether the neighbor falls in the image; if so, we add it to sumArray[]. For example, the point just below (xx, yy) is at the bottom center of the 3x3 rectangle of points we are averaging. We know that xx falls within the image; yy falls within the image if it doesn't equal savedHeight-1. We do similar tests for the other points.

Even though we're working with a rectangular image, our arrays are all one-dimensional so we have to convert a coordinate pair (xx, yy) into a single array index. To help us do the bookkeeping, we use the local variable start to keep track of the start of the current scan line. Then start + xx is the current point; start + xx + savedWidth is the point immediately below; start + xx + savedWidth-1 is the point below and to the left; and so on.

avgPixels() is our helper method for computing the average value that we assign to the new pixel. For each pixel in the pixels[] array, it extracts the red, green, blue, and alpha components, averages each separately, and returns a new ARGB value.

    private int avgPixels (int pixels[], int size) {
        float redSum=0, greenSum=0, blueSum=0, alphaSum=0;
        for (int i=0;i<size;i++)
            try {
                int pixel = pixels[i];
                redSum   += defaultCM.getRed   (pixel);
                greenSum += defaultCM.getGreen (pixel);
                blueSum  += defaultCM.getBlue  (pixel);
                alphaSum += defaultCM.getAlpha (pixel);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println ("Ooops");
            }
        int redAvg   = (int)(redSum   / size);
        int greenAvg = (int)(greenSum / size);
        int blueAvg  = (int)(blueSum  / size);
        int alphaAvg = (int)(alphaSum / size);
        return ((alphaAvg << 24) | (redAvg << 16) |
                (greenAvg << 8)  | blueAvg);
    }
}
Producing many images from one: dynamic ImageFilter

The ImageFilter framework is flexible enough to allow you to return a sequence of images based on an original. You can send back one frame at a time, calling the following when you are finished with each frame:

consumer.imageComplete(ImageConsumer.SINGLEFRAMEDONE);

After you have generated all the frames, you can tell the consumer that the sequence is finished with the STATICIMAGEDONE constant. In fact, this is exactly what the new animation capabilities of MemoryImageSource use.

In Example 12.8, the DynamicFilter lets the consumer display an image. After the image has been displayed, the filter gradually overwrites the image with a specified color by sending additional image frames. The end result is a solid colored rectangle. Not too exciting, but it's easy to imagine interesting extensions: you could use this technique to implement a fade from one image into another. The key points to understand are how each rectangle is delivered as its own frame, terminated by SINGLEFRAMEDONE, and how STATICIMAGEDONE is sent only once, when the whole sequence is finished; the discussion following the listing walks through the details.

Example 12.8: DynamicFilter Source

import java.awt.*;
import java.awt.image.*;
public class DynamicFilter extends ImageFilter {
    Color overlapColor;
    int   delay;
    int   imageWidth;
    int   imageHeight;
    int   iterations;
    DynamicFilter (int delay, int iterations, Color color) {
        this.delay      = delay;
        this.iterations = iterations;
        overlapColor    = color;
    }
    public void setDimensions (int width, int height) {
        imageWidth  = width;
        imageHeight = height;
        consumer.setDimensions (width, height);
    }
    public void setHints (int hints) {
        consumer.setHints (ImageConsumer.RANDOMPIXELORDER);
    }
    public void resendTopDownLeftRight (ImageProducer ip) {
    }
    public void imageComplete (int status) {
        if ((status == IMAGEERROR) || (status == IMAGEABORTED)) {
            consumer.imageComplete (status);
            return;
        } else {
            int xWidth = imageWidth / iterations;
            if (xWidth <= 0)
                xWidth = 1;
            int newPixels[] = new int [xWidth*imageHeight];
            int iColor = overlapColor.getRGB();
            for (int x=0;x<(xWidth*imageHeight);x++)
                newPixels[x] = iColor;
            int t=0;
            for (;t<(imageWidth-xWidth);t+=xWidth) {
                consumer.setPixels(t, 0, xWidth, imageHeight,
                        ColorModel.getRGBdefault(), newPixels, 0, xWidth);
                consumer.imageComplete (ImageConsumer.SINGLEFRAMEDONE);
                try {
                    Thread.sleep (delay);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            int left = imageWidth-t;
            if (left > 0) {
                consumer.setPixels(imageWidth-left, 0, left, imageHeight,
                        ColorModel.getRGBdefault(), newPixels, 0, xWidth);
                consumer.imageComplete (ImageConsumer.SINGLEFRAMEDONE);
            }
            consumer.imageComplete (STATICIMAGEDONE);
        }
    }
}

The DynamicFilter relies on the default setPixels() method to send the original image to the consumer. When the original image has been transferred, the image producer calls this filter's imageComplete() method, which does the real work. Instead of relaying the completion status to the consumer, imageComplete() starts generating its own data: solid rectangles that are all in the overlapColor specified in the constructor. It sends these rectangles to the consumer by calling consumer.setPixels(). After each rectangle, it calls consumer.imageComplete() with the SINGLEFRAMEDONE flag, meaning that it has just finished one frame of a multi-frame sequence. When the rectangles have completely covered the image, the method imageComplete() finally notifies the consumer that the entire image sequence has been transferred by sending the STATICIMAGEDONE flag.

The following code is a simple applet that uses this image filter to produce a new image:

import java.applet.*;
import java.awt.*;
import java.awt.image.*;
public class DynamicImages extends Applet {
    Image i, j;
    public void init () {
        i = getImage (getDocumentBase(), "rosey.jpg");
        j = createImage (new FilteredImageSource (i.getSource(),
                        new DynamicFilter(250, 10, Color.red)));
    }
    public void paint (Graphics g) {
        g.drawImage (j, 10, 10, this);
    }
}

One final curiosity: the DynamicFilter doesn't make any assumptions about the color model used for the original image. It sends its overlays with the default RGB color model. Therefore, this is one case in which an ImageConsumer may see calls to setPixels() that use different color models.

RGBImageFilter

RGBImageFilter is an abstract subclass of ImageFilter that provides a shortcut for building the most common kind of image filters: filters that independently modify the pixels of an existing image, based only on the pixel's position and color. Because RGBImageFilter is an abstract class, you must subclass it before you can do anything. The only method your subclass must provide is filterRGB(), which produces a new pixel value based on the original pixel and its location. A handful of additional methods are in this class; most of them provide the behind-the-scenes framework for funneling each pixel through the filterRGB() method.

If the filtering algorithm you are using does not rely on pixel position (i.e., the new pixel is based only on the old pixel's color), AWT can apply an optimization for images that use an IndexColorModel: rather than filtering individual pixels, it can filter the image's color map. In order to tell AWT that this optimization is okay, add a constructor to the class definition that sets the canFilterIndexColorModel variable to true. If canFilterIndexColorModel is false (the default), IndexColorModel images are still filtered, but pixel by pixel, without the shortcut.

Variables

protected boolean canFilterIndexColorModel

Setting the canFilterIndexColorModel variable to true permits the ImageFilter to filter the color map of IndexColorModel images directly. The default value is false. When this variable is false, the pixels of an IndexColorModel image are converted to the default RGB color model and filtered individually. When this variable is true, the ImageFilter filters the colormap instead of the individual pixel values.

protected ColorModel newmodel

The newmodel variable is used to store the new ColorModel when canFilterIndexColorModel is true and the ColorModel actually is of type IndexColorModel. Normally, you do not need to access this variable, even in subclasses.

protected ColorModel origmodel

The origmodel variable stores the original color model when filtering an IndexColorModel. Normally, you do not need to access this variable, even in subclasses.

Constructors

public RGBImageFilter ()--called by subclass

The only constructor for RGBImageFilter is the implied constructor with no parameters. In most subclasses of RGBImageFilter, the constructor has to initialize only the canFilterIndexColorModel variable.

ImageConsumer interface methods

public void setColorModel (ColorModel model)

The setColorModel() method changes the ColorModel of the filter to model. If canFilterIndexColorModel is true and model is of type IndexColorModel, a filtered version of model is used instead.

public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int off, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int off, int scansize)

If necessary, the setPixels() method converts the pixels buffer to the default RGB ColorModel and then filters them with filterRGBPixels(). If model has already been converted, this method just passes the pixels along to the consumer's setPixels().

Other methods

The only method you care about here is filterRGB(). All subclasses of RGBImageFilter must override this method. It is very difficult to imagine situations in which you would override (or even call) the other methods in this group. They are helper methods that funnel pixels through filterRGB().

public void substituteColorModel (ColorModel oldModel, ColorModel newModel)

substituteColorModel() is a helper method for setColorModel(). It initializes the protected variables of RGBImageFilter. The origmodel variable is set to oldModel and the newmodel variable is set to newModel.

public IndexColorModel filterIndexColorModel (IndexColorModel icm)

filterIndexColorModel() is another helper method for setColorModel(). It runs the entire color table of icm through filterRGB() and returns the filtered ColorModel for use by setColorModel().

public void filterRGBPixels (int x, int y, int width, int height, int pixels[], int off, int scansize)

filterRGBPixels() is a helper method for setPixels(). It filters each element of the pixels buffer through filterRGB(), converting pixels to the default RGB ColorModel first. This method changes the values in the pixels array.

public abstract int filterRGB (int x, int y, int rgb)

filterRGB() is the one method that RGBImageFilter subclasses must implement. The method takes the rgb pixel value at position (x, y) and returns the converted pixel value in the default RGB ColorModel. Coordinates of (-1, -1) signify that a color table entry is being filtered instead of a pixel.
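
For contrast with the position-independent filters shown elsewhere in this chapter, here is a sketch of a position-dependent filter (the class is purely illustrative; it darkens every other column). Because its result depends on x, it must leave canFilterIndexColorModel at its default of false, so filterRGB() always receives real pixel coordinates rather than (-1, -1).

import java.awt.image.RGBImageFilter;

// An illustrative position-dependent filter: darken every other column.
// canFilterIndexColorModel is left at its default of false, because the
// result depends on the pixel's position, not just its color.
public class ScanlineFilter extends RGBImageFilter {
    public int filterRGB (int x, int y, int rgb) {
        if ((x & 1) == 0)
            return rgb;                      // even columns pass through
        int r = ((rgb >> 16) & 0xff) / 2;    // halve each color component
        int g = ((rgb >>  8) & 0xff) / 2;
        int b = ( rgb        & 0xff) / 2;
        return (rgb & 0xff000000) | (r << 16) | (g << 8) | b;
    }
}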

A transparent image filter that extends RGBImageFilter

Creating your own RGBImageFilter is fairly easy. One of the more common applications for an RGBImageFilter is to make images transparent by setting the alpha component of each pixel. To do so, we extend the abstract RGBImageFilter class. The filter in Example 12.9 makes the entire image translucent, based on a percentage passed to the class constructor. Filtering is independent of position, so the constructor can set the canFilterIndexColorModel variable. A constructor with no arguments uses a default percentage of 0.75, so the filtered image keeps 75% of its original alpha.

Example 12.9: TransparentImageFilter Source

import java.awt.image.*;
class TransparentImageFilter extends RGBImageFilter {
    float alphaPercent;
    public TransparentImageFilter () {
        this (0.75f);
    }
    public TransparentImageFilter (float aPercent)
            throws IllegalArgumentException {
        if ((aPercent < 0.0) || (aPercent > 1.0))
            throw new IllegalArgumentException();
        alphaPercent = aPercent;
        canFilterIndexColorModel = true;
    }
    public int filterRGB (int x, int y, int rgb) {
        int a = (rgb >> 24) & 0xff;
        a *= alphaPercent;
        return ((rgb & 0x00ffffff) | (a << 24));
    }
}

CropImageFilter

The CropImageFilter is an ImageFilter that crops an image to a rectangular region. When used with FilteredImageSource, it produces a new image that consists of a portion of the original image. The cropped region must be completely within the original image. It is never necessary to subclass this class. Also, using the 10 or 11 argument version of Graphics.drawImage() introduced in Java 1.1 precludes the need to use this filter, unless you need to save the resulting cropped image.

If you crop an image and then send the result through a second ImageFilter, the pixel array received by the filter will be the size of the original Image, with the offset and scansize set accordingly. The width and height are set to the cropped values; the result is a smaller Image with the same amount of data. CropImageFilter keeps the full pixel array around, partially empty.

Constructors

public CropImageFilter (int x, int y, int width, int height)

The constructor for CropImageFilter specifies the rectangular area of the old image that makes up the new image. The (x, y) coordinates specify the top left corner for the cropped image; width and height must be positive or the resulting image will be empty. If the (x, y) coordinates are outside the original image area, the resulting image is empty. If (x, y) starts within the image but the rectangular area of size width x height goes beyond the original image, the part that extends outside will be black. (Remember the color black has pixel values of 0 for red, green, and blue.)

ImageConsumer interface methods

public void setProperties (Hashtable properties)

The setProperties() method adds the croprect image property to the properties list. The bounding Rectangle, specified by the (x, y) coordinates and width x height size, is associated with this property. After updating properties, this method sets the properties list of the consumer.

public void setDimensions (int width, int height)

The setDimensions() method of CropImageFilter ignores the width and height parameters to the function call. Instead, it relies on the size parameters in the constructor.

public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize)

These setPixels() methods check to see what portion of the pixels array falls within the cropped area and pass those pixels along.

Cropping an image with CropImageFilter

Example 12.10 uses a CropImageFilter to extract the center third of a larger image. No subclassing is needed; the CropImageFilter is complete in itself. The output is displayed in Figure 12.7.

Example 12.10: Crop Applet Source

import java.applet.*;
import java.awt.*;
import java.awt.image.*;
public class Crop extends Applet {
    Image i, j;
    public void init () {
        MediaTracker mt = new MediaTracker (this);
        i = getImage (getDocumentBase(), "rosey.jpg");
        mt.addImage (i, 0);
        try {
            mt.waitForAll();
            int width        = i.getWidth(this);
            int height       = i.getHeight(this);
            j = createImage (new FilteredImageSource (i.getSource(),
                                new CropImageFilter (width/3, height/3,
                                                     width/3, height/3)));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    public void paint (Graphics g) {
        g.drawImage (i, 10, 10, this);                  // regular
        if (j != null) {
            g.drawImage (j, 10, 90, this);             // cropped
        }
    }
}

TIP:

You can use CropImageFilter to improve animation performance or simply to reduce the overall download time of your images. Without CropImageFilter, you would use Graphics.clipRect() to clip each frame of an image strip every time you draw it. Instead, you can use CropImageFilter once to create a separate Image for each cell of the strip. And when an image strip is inappropriate, you can still put all your images in a single file (in any order whatsoever) and use CropImageFilter to extract each one as its own Image.
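
Here is a sketch of the image-strip idea; the file name "strip.gif" and the ten 32 x 32 cells are assumptions made for the example.

import java.applet.Applet;
import java.awt.*;
import java.awt.image.*;

// An illustrative sketch: pull the individual cells out of a horizontal
// image strip.  The file name and cell layout are assumptions.
public class StripCells extends Applet {
    Image cells[] = new Image [10];
    public void init () {
        Image strip = getImage (getDocumentBase(), "strip.gif");
        for (int n = 0; n < cells.length; n++)
            cells[n] = createImage (new FilteredImageSource (strip.getSource(),
                                new CropImageFilter (n * 32, 0, 32, 32)));
    }
    public void paint (Graphics g) {
        g.drawImage (cells[0], 10, 10, this);   // draw one cell of the strip
    }
}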

ReplicateScaleFilter

Back in Chapter 2, Simple Graphics, we introduced the getScaledInstance() method. That method uses a new image filter provided with Java 1.1. The ReplicateScaleFilter and its subclass, AreaAveragingScaleFilter, allow you to scale images before calling drawImage(). This can speed up your programs considerably because the scaling is done ahead of time instead of during the call to drawImage().

The ReplicateScaleFilter is an ImageFilter that scales by duplicating or removing rows and columns. When used with FilteredImageSource, it produces a new image that is a scaled version of the original. As you can guess, ReplicateScaleFilter is very fast, but the results aren't particularly pleasing aesthetically. It is great if you want to magnify a checkerboard but not that useful if you want to scale an image of your Aunt Polly. Its subclass, AreaAveragingScaleFilter, implements a more time-consuming algorithm that is more suitable when image quality is a concern.

Constructor

public ReplicateScaleFilter (int width, int height)

The constructor for ReplicateScaleFilter specifies the size of the resulting image. If either parameter is -1, the resulting image maintains the same aspect ratio as the original image.
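
For example, a fragment in the style of the earlier logo.jpg example (where image is assumed to be an Image you have already loaded inside an applet or component) might scale it like this:

// An illustrative sketch: scale an existing Image with ReplicateScaleFilter.
Image scaled = createImage (new FilteredImageSource (image.getSource(),
                            new ReplicateScaleFilter (100, 100)));
// Passing -1 for one dimension preserves the original aspect ratio:
Image thumbnail = createImage (new FilteredImageSource (image.getSource(),
                            new ReplicateScaleFilter (100, -1)));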

ImageConsumer interface methods

public void setProperties (Hashtable properties)

The setProperties() method adds the rescale image property to the properties list. The value of the rescale property is a string showing the image's new width and height, in the form "<width>x<height>", where the width and height are taken from the constructor. After updating properties, this method sets the properties list of the consumer.

public void setDimensions (int width, int height)

The setDimensions() method of ReplicateScaleFilter passes the new width and height from the constructor along to the consumer. If exactly one of the constructor's parameters is negative, that dimension is recalculated to preserve the original image's aspect ratio. If both are negative, the filtered image keeps the original width and height.
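
The proportional recalculation amounts to arithmetic along these lines; this is an illustration of the rule, not the actual library source. Here width and height are the original image's dimensions, and destWidth and destHeight hold the values passed to the constructor.

// An illustration of the recalculation (not the library source).
if (destWidth < 0 && destHeight < 0) {
    destWidth  = width;                         // keep the original size
    destHeight = height;
} else if (destWidth < 0) {
    destWidth  = destHeight * width / height;   // preserve the aspect ratio
} else if (destHeight < 0) {
    destHeight = destWidth * height / width;
}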

public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize)
public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize)

The setPixels() method of ReplicateScaleFilter checks to see which rows and columns of pixels to pass along.

AreaAveragingScaleFilter

The AreaAveragingScaleFilter subclasses ReplicateScaleFilter to provide a better scaling algorithm. Instead of just dropping or adding rows and columns, AreaAveragingScaleFilter tries to blend pixel values when creating new rows and columns. The filter works by replicating rows and columns to generate an image that is a multiple of the original size. Then the image is resized back down by an algorithm that blends the pixels around each destination pixel.

AreaAveragingScaleFilter methods

Because this filter subclasses ReplicateScaleFilter, the only methods it includes are those that override methods of ReplicateScaleFilter.

Constructors

public AreaAveragingScaleFilter (int width, int height) (New)

The constructor for AreaAveragingScaleFilter specifies the size of the resulting image. If either parameter is -1, the resulting image maintains the same aspect ratio as the original image.

ImageConsumer interface methods

public void setHints (int hints) (New)

The setHints() method of AreaAveragingScaleFilter checks to see if some optimizations can be performed based upon the value of the hints parameter. If they can't, the image filter has to cache the pixel data until it receives the entire image.

public void setPixels (int x, int y, int w, int h, ColorModel model, byte pixels[], int offset, int scansize) (New)
public void setPixels (int x, int y, int w, int h, ColorModel model, int pixels[], int offset, int scansize) (New)

The setPixels() method of AreaAveragingScaleFilter accumulates the pixels or passes them along based upon the available hints. If setPixels() accumulates the pixels, this filter passes them along to the consumer when appropriate.

Cascading Filters

It is often a good idea to perform complex filtering operations by using several filters in a chain. This technique requires the system to perform several passes through the image array, so it may be slower than using a single complex filter; however, cascading filters yield code that is easier to understand and quicker to write--particularly if you already have a collection of image filters from other projects.

For example, assume you want to make a color image transparent and then render the image in black and white. The easy way to do this task is to apply a filter that converts color to a gray value and then apply the TransparentImageFilter we developed in Example 12.9. Using this strategy, we have to develop only one very simple filter. Example 12.11 shows the source for the GrayImageFilter; Example 12.12 shows the applet that applies the two filters in a daisy chain.

Example 12.11: GrayImageFilter Source

import java.awt.image.*;
public class GrayImageFilter extends RGBImageFilter {
    public GrayImageFilter () {
        canFilterIndexColorModel = true;
    }
    public int filterRGB (int x, int y, int rgb) {
        int gray  = (((rgb & 0xff0000) >> 16) +
                        ((rgb & 0x00ff00) >> 8) +
                        (rgb & 0x0000ff)) / 3;
        return (0xff000000 | (gray << 16) | (gray <<  8) |  gray);
    }
}

Example 12.12: DrawingImages Source

import java.applet.*;
import java.awt.*;
import java.awt.image.*;
public class DrawingImages extends Applet {
    Image i, j, k, l;
    public void init () {
        i = getImage (getDocumentBase(), "rosey.jpg");
        GrayImageFilter gif = new GrayImageFilter ();
        j = createImage (new FilteredImageSource (i.getSource(), gif));
        TransparentImageFilter tf = new TransparentImageFilter (.5f);
        k = createImage (new FilteredImageSource (j.getSource(), tf));
        l = createImage (new FilteredImageSource (i.getSource(), tf));
    }
    public void paint (Graphics g) {
        g.drawImage (i, 10, 10, this);                  // regular
        g.drawImage (j, 270, 10, this);                 // gray
        g.drawImage (k, 10, 110, Color.red, this);      // gray - transparent
        g.drawImage (l, 270, 110, Color.red, this);     // transparent
    }
}

Granted, neither the GrayImageFilter nor the TransparentImageFilter is very complex, but consider the savings you would get if you wanted to blur an image, crop it, and then render the result in grayscale. Writing a filter that does all three is not a task for the faint of heart; remember, you can't subclass RGBImageFilter because the result would not depend purely on each pixel's color and position, and CropImageFilter isn't designed to be subclassed. However, you can solve the problem easily by cascading the filters developed in this chapter.
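
For instance, a cascade built from the filters in this chapter might look like the following sketch, written in the style of the earlier fragments; the image name and crop rectangle are assumptions, and the code belongs inside an applet or component.

// An illustrative sketch: blur, then crop, then convert to grayscale by
// chaining the filters developed in this chapter.
Image original = getImage (getDocumentBase(), "rosey.jpg");
Image blurred  = createImage (new FilteredImageSource (original.getSource(),
                              new BlurFilter()));
Image cropped  = createImage (new FilteredImageSource (blurred.getSource(),
                              new CropImageFilter (20, 20, 100, 100)));
Image gray     = createImage (new FilteredImageSource (cropped.getSource(),
                              new GrayImageFilter()));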

