This document describes the stream encoding and stream decoding methods, and explains how to encode and decode an image by passing pixel data block by block.
Required Background
Images are encoded and decoded using imageconversion.dll. Currently only the JPEG codec supports the streaming block method, so jpegcodec.dll from the Imaging Plugins component is used.
Introduction
An image is compressed into an image frame. Normally a frame is decoded or encoded in one go, which consumes more memory. The Symbian JPEG codec now provides enhanced functionality during encode and decode operations through the Stream Encoding and Stream Decoding methods. With these methods, an image frame that is part of a compressed image can be divided into sub-blocks of YUV pixel data, which are then encoded or decoded block by block.
For decoding, MImageConvStreamedDecode and TImageConvStreamedDecode adapt the streaming functionality; for encoding, MImageConvStreamedEncode and TImageConvStreamedEncode adapt it.
Note: Only the Symbian JPEG codec supports decoding and encoding an image using the Stream Encoding and Stream Decoding methods, which consume less memory. No other Symbian codecs have been modified to provide this support.
The Stream Encoding and Stream Decoding methods also support cropping or scaling an image, with blocks processed in sequential or random order.
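The block division described above can be sketched in standard (non-Symbian) C++. The helper below is illustrative only and not part of the ICL API; it counts the sub-blocks a frame splits into, assuming the image dimensions are multiples of the block size:

```cpp
#include <cstddef>

// Hypothetical helper (not part of the Symbian API): the grid of sub-blocks
// an image frame splits into when streamed block by block.
struct TBlockGrid { int iCols; int iRows; };

TBlockGrid BlockGrid(int aWidthPx, int aHeightPx, int aBlockPx)
    {
    // Each block covers aBlockPx x aBlockPx pixels; the streaming methods
    // require the image dimensions to be multiples of the block size.
    TBlockGrid grid;
    grid.iCols = aWidthPx / aBlockPx;
    grid.iRows = aHeightPx / aBlockPx;
    return grid;
    }
```

For example, a 640 x 480 image streamed in 16 x 16 pixel blocks splits into 40 columns by 30 rows of blocks, each of which can be processed independently.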
Setup and Configuration Requirements
For the encoder or decoder to perform streaming, you need to set up the navigation mode using the streaming capabilities. The streaming capabilities for decoding are described by TDecodeStreamCaps. For example, you can obtain the optimum number of blocks to stream in a single request, for maximum performance, from the parameter aOptimalBlocksPerRequest.
The streaming capabilities for encoding are supported by the Image Processor Adaptation Plug-in encoder. For example, you can obtain the maximum number of blocks that can be streamed in a single request from the parameter aMaxBlocksPerRequest.
During the decode operation, the blocks or sub-frames can be navigated in the following ways:
The sub-frames or blocks can be passed sequentially from the top left of the image, left to right and top to bottom.
The blocks can be passed in random order to access each block.
During the encode operation, the blocks or sub-frames can be passed sequentially from first to last, or in random order moving forward or backward.
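The two sequential navigation orders can be sketched in standard C++. The functions below are illustrative only, not the ICL API; they produce the order in which block indices of an N-block frame are visited:

```cpp
#include <vector>
#include <algorithm>

// Illustrative sketch (not the Symbian API): block visit order for the
// sequential-forward navigation mode, from the top-left block onwards.
std::vector<int> SequentialForward(int aNumBlocks)
    {
    std::vector<int> order(aNumBlocks);
    for (int i = 0; i < aNumBlocks; ++i)
        order[i] = i;               // top left to bottom right
    return order;
    }

// Block visit order for the sequential-backwards mode: last block first.
std::vector<int> SequentialBackwards(int aNumBlocks)
    {
    std::vector<int> order = SequentialForward(aNumBlocks);
    std::reverse(order.begin(), order.end());
    return order;
    }
```

Random navigation, by contrast, lets the client request any block index in any order, subject to the direction restriction of the chosen mode.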
The following tasks are covered in this tutorial:
Basic Procedure For Stream Encoding And Stream Decoding
The high-level steps to perform block streaming during encode and decode operations are as follows:
To create the encoder, call CImageEncoder::FileNewL() or CImageEncoder::DataNewL(); to create the decoder, call CImageDecoder::FileNewL() or CImageDecoder::DataNewL().
For encoder streaming, request a streaming interface through CImageEncoder::BlockStreamerL(); for decoder streaming, request one through CImageDecoder::BlockStreamerL().
After requesting the streaming interface, if the streaming extension is supported, a pointer to a T class is returned which gives access to the JPEG codec extension. TImageConvStreamedDecode provides the extension functionality for stream decoding, and TImageConvStreamedEncode provides it for stream encoding.
To set the navigation mode, choose a value of the TEncodeStreamCaps::TNavigation enumeration for encode streaming, or of the TDecodeStreamCaps::TNavigation enumeration for decode streaming, and pass it to the corresponding InitFrameL() call.
For decode streaming, the navigation possibilities are:
The blocks are returned from first to last.
The blocks are returned from last to first.
The blocks are returned randomly e.g. 18, 5, 20.
The blocks are returned in a random order but moving only from first to last e.g. 1, 5, 18.
The blocks are returned in a random order but moving only from last to first e.g. 18, 5, 1.
The navigation enumeration is shown below:
enum TNavigation
    {
    ENavigationSequentialForward   = 0x01, // sequential order from first to last
    ENavigationSequentialBackwards = 0x10, // sequential order from last to first
    ENavigationRandom              = 0x08, // random order
    ENavigationRandomForward       = 0x02, // random order from first to last
    ENavigationRandomBackwards     = 0x04  // random order from last to first
    };
For encode streaming, the navigation possibilities are:
The blocks are returned from first to last.
The blocks are returned in a random order but moving only from first to last e.g. 1, 5, 18.
The blocks are returned in a random order but moving only from last to first e.g. 18, 5, 1.
enum TNavigation
    {
    ENavigationSequentialForward = 0x01, // sequential order from first to last
    ENavigationRandomForward     = 0x02, // random order from first to last
    ENavigationRandomBackwards   = 0x04  // random order from last to first
    };
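Because the enumerators are distinct bit flags, a codec can advertise several supported modes in a single mask, and a client can test for a particular mode with a bitwise AND. The sketch below is illustrative standard C++, reusing the enumerator values from the text; the helper function is not part of the ICL API:

```cpp
// Enumerator values as given in the text; declared here so the sketch is
// self-contained.
enum TNavigation
    {
    ENavigationSequentialForward   = 0x01,
    ENavigationRandomForward       = 0x02,
    ENavigationRandomBackwards     = 0x04,
    ENavigationRandom              = 0x08,
    ENavigationSequentialBackwards = 0x10
    };

// Hypothetical helper (not the Symbian API): does a codec's
// supported-navigation bitmask include the requested mode?
bool SupportsNavigation(int aSupportedMask, TNavigation aWanted)
    {
    return (aSupportedMask & aWanted) != 0;
    }
```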
To initialize the stream decoder, call TImageConvStreamedDecode::InitFrameL(), passing the format UID, the frame number and the navigation mode. To initialize the stream encoder, call TImageConvStreamedEncode::InitFrameL(), passing in addition the frame size and block size in pixels.
During decoding, the buffer wrapped by the destination CImageFrame must be large enough to contain the decoded frame. To obtain the buffer size required for a particular decode, call TImageConvStreamedDecode::GetBufferSize().
GetBufferSize() returns the required buffer size in bytes, and sets its aBlockSizeInPixels parameter to the size of the returned blocks in pixels.
To store the image data in any format or layout described by a format code UID, create an empty image frame using CImageFrame.
To set the image frame size in pixels, call CImageFrame::SetFrameSizeInPixels(), passing as aFrameSize the block size in pixels returned by GetBufferSize().
In decode streaming, to start the asynchronous call that returns blocks, use MImageConvStreamedDecode::GetNextBlocks().
In encode streaming, to start the asynchronous call that appends blocks, use MImageConvStreamedEncode::AppendBlocks().
Note: The memory optimization is mainly achieved by GetNextBlocks() and AppendBlocks() applying effects to one image frame block at a time. Streaming is only supported for images whose dimensions are multiples of the Minimum Coded Unit (MCU).
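The MCU constraint can be checked before attempting to stream. The helper below is an illustrative standard C++ sketch, not part of the ICL API; the 16 x 8 MCU used in the test is an assumption typical of 4:2:2 JPEG sampling, not a value taken from the source:

```cpp
// Hypothetical helper (not the Symbian API): streaming is only supported
// when both image dimensions are multiples of the MCU dimensions.
bool IsMcuAligned(int aWidthPx, int aHeightPx, int aMcuWidthPx, int aMcuHeightPx)
    {
    return (aWidthPx % aMcuWidthPx == 0) && (aHeightPx % aMcuHeightPx == 0);
    }
```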
Example
The example below shows how to use the stream encoding and stream decoding methods:
void CIclExample::StreamDecodeAndEncodeYuvFrameL(const TDesC& aSrcFileName, const TDesC& aDestFileName)
    {
    const TInt KFrameNumber = 0;                      // first frame
    const TUid KFormat = KUidFormatYUV422Interleaved; // 422 sampling scheme
    const TInt KNumBlocksToGet = 1;
    RChunk chunk;
    TSize streamBlockSizeInPixels;
    TEncodeStreamCaps caps;
    TInt numBlocksRead = 0;
    TBool haveMoreBlocks = ETrue;

    // Create the decoder, passing the filename. The image is recognised by the
    // Image Conversion Library, an appropriate codec plugin loaded and the image headers parsed.
    // If the image is not recognised or valid then the call will leave with an error
    CImageDecoder* jpegImageDecoder = static_cast<CJPEGImageFrameDecoder*>(
        CImageDecoder::FileNewL(iFs, aSrcFileName));
    CleanupStack::PushL(jpegImageDecoder);

    // Create the encoder, passing the filename. The image is recognised by the
    // Image Conversion Library, an appropriate codec plugin loaded and the image headers parsed.
    // If the image is not recognised or valid then the call will leave with an error
    CImageEncoder* jpegImageEncoder = static_cast<CJPEGImageFrameEncoder*>(
        CImageEncoder::FileNewL(iFs, aDestFileName, CImageEncoder::EOptionNone, KImageTypeJPGUid));
    CleanupStack::PushL(jpegImageEncoder);

    // Create encode & decode Block Streamers
    TImageConvStreamedDecode* streamDecode = jpegImageDecoder->BlockStreamerL();
    TImageConvStreamedEncode* streamEncode = jpegImageEncoder->BlockStreamerL();

    TFrameInfo frameInfo = jpegImageDecoder->FrameInfo();
    TSize frameSizeInPixels(frameInfo.iOverallSizeInPixels);

    // NOTE: The image used for decoding should be a multiple of the MCU (Minimum Coded Unit)
    // Set the navigation mode and initialize the decoder frame
    TDecodeStreamCaps::TNavigation decodeNavigation = TDecodeStreamCaps::ENavigationSequentialForward;
    streamDecode->InitFrameL(KFormat, KFrameNumber, decodeNavigation);

    streamEncode->GetCapabilities(KFormat, caps);
    TSize blockSizeInPixels = TSize(caps.MinBlockSizeInPixels());

    // Initialize the encoder frame
    TEncodeStreamCaps::TNavigation encodeNavigation = TEncodeStreamCaps::ENavigationSequentialForward;
    streamEncode->InitFrameL(KFormat, KFrameNumber, frameSizeInPixels, blockSizeInPixels, encodeNavigation, NULL);

    // When decoding, the buffer wrapped by the destination CImageFrame must be large enough
    // to contain the decoded frame. GetBufferSize() should be used to obtain the buffer size
    // required for a particular decode
    TInt imageSizeInBytes = streamDecode->GetBufferSize(KFormat, streamBlockSizeInPixels, KNumBlocksToGet);
    User::LeaveIfError(chunk.CreateGlobal(KRChunk, imageSizeInBytes, imageSizeInBytes, EOwnerProcess));
    CleanupClosePushL(chunk);

    // Create an empty image frame
    CImageFrame* imageFrame = CImageFrame::NewL(&chunk, imageSizeInBytes, 0);
    CleanupStack::PushL(imageFrame);
    imageFrame->SetFrameSizeInPixels(streamBlockSizeInPixels);

    while (haveMoreBlocks)
        {
        // See Note 1
        CActiveListener* activeListener = CreateAndInitializeActiveListenerLC();

        // Decoder: get blocks
        streamDecode->GetNextBlocks(activeListener->iStatus, *imageFrame, KNumBlocksToGet, numBlocksRead, haveMoreBlocks);
        // See Note 2
        CActiveScheduler::Start();
        User::LeaveIfError(activeListener->iStatus.Int()); // decode complete

        // NOTE: Apply effects such as brightness adjustment to the image frame
        // block here; streaming makes this possible in low memory conditions

        // See Note 1
        activeListener->InitializeActiveListener();

        // Encoder: append blocks
        streamEncode->AppendBlocks(activeListener->iStatus, *imageFrame, numBlocksRead);
        // See Note 2
        CActiveScheduler::Start();
        User::LeaveIfError(activeListener->iStatus.Int()); // encode complete

        CleanupStack::PopAndDestroy(activeListener);
        }

    CleanupStack::PopAndDestroy(4); // imageFrame, chunk, jpegImageEncoder and jpegImageDecoder
    }