Image Processing

The SDK provides a rich set of image processing APIs for displaying images, saving images, converting pixel formats, reconstructing images, recording, etc. After images are acquired, they can be passed to these APIs for processing.

This chapter includes: Render Image, Save Image, Convert Pixel Format, Decode Image, Recording, and Divide Image via Multi-Light Control Feature.

Render Image

After acquiring an image, you can refer to the following sample code and call MV_CC_DisplayOneFrameEx2() to display the image in a specified window.
// 1. Get image data
MV_CC_IMAGE stImage = {0};
stImage.nWidth = stImageInfo.stFrameInfo.nExtendWidth;
stImage.nHeight = stImageInfo.stFrameInfo.nExtendHeight;
stImage.enPixelType = stImageInfo.stFrameInfo.enPixelType;
stImage.pImageBuf = stImageInfo.pBufAddr;
stImage.nImageBufLen = stImageInfo.stFrameInfo.nFrameLenEx;
// 2. Configure rendering mode
unsigned int enRenderMode = 0;
// 3. Display image
nRet = MV_CC_DisplayOneFrameEx2(handle, (void*)g_hwnd, &stImage, enRenderMode);
Check(nRet);
  1. Get the width, height, pixel format, buffer address, and buffer length of the image.
  2. Configure the image rendering mode. Mode 0 (the default) indicates OpenGL rendering.
  3. Render the acquired image in the specified mode by calling MV_CC_DisplayOneFrameEx2().


Save Image

You can save the image to a local file or to memory.
The following sample code shows how to save images as local BMP files.
// Save raw data as bmp image
char chImageName[IMAGE_NAME_LEN] = { 0 };
MV_CC_IMAGE stImage;
memset(&stImage, 0, sizeof(MV_CC_IMAGE));
MV_CC_SAVE_IMAGE_PARAM stSaveImageParam;
memset(&stSaveImageParam, 0, sizeof(MV_CC_SAVE_IMAGE_PARAM));
stImage.enPixelType = stImageInfo.stFrameInfo.enPixelType;
stImage.nHeight = stImageInfo.stFrameInfo.nExtendHeight;
stImage.nWidth = stImageInfo.stFrameInfo.nExtendWidth;
stImage.nImageBufLen = stImageInfo.stFrameInfo.nFrameLenEx;
stImage.pImageBuf = stImageInfo.pBufAddr;
stSaveImageParam.enImageType = MV_Image_Bmp;
stSaveImageParam.iMethodValue = 1;
stSaveImageParam.nQuality = 99;
sprintf_s(chImageName, IMAGE_NAME_LEN, "InPut_w%d_h%d_fn%03d.bmp", stImage.nWidth, stImage.nHeight, stImageInfo.stFrameInfo.nFrameNum);
nRet = MV_CC_SaveImageToFileEx2(handle, &stImage, &stSaveImageParam, chImageName);
Check(nRet);


Convert Pixel Format

Call MV_CC_ConvertPixelTypeEx() to convert an image from one pixel format to another.
For instance, you can convert Bayer to RGB/BGR. Interpolation of Bayer-format images supports several algorithms, as well as options such as smoothing filters, Gamma correction, and CCM correction. Refer to Image Processing for details.
The following sample code shows how to convert images to RGB format.
// Set interpolation method to equalization.
nRet = MV_CC_SetBayerCvtQuality(handle, 1);
Check(nRet);
unsigned char *pConvertData = NULL;
unsigned int nConvertDataSize = 0;
nConvertDataSize = stOutFrame.stFrameInfo.nExtendWidth * stOutFrame.stFrameInfo.nExtendHeight* 3;
pConvertData = (unsigned char*)malloc(nConvertDataSize);
if (NULL == pConvertData)
{
printf("pConvertData is null\n");
break;
}
// Pixel format conversion
MV_CC_PIXEL_CONVERT_PARAM_EX stConvertParam = {0};
stConvertParam.nWidth = stOutFrame.stFrameInfo.nExtendWidth;
stConvertParam.nHeight = stOutFrame.stFrameInfo.nExtendHeight;
stConvertParam.pSrcData = stOutFrame.pBufAddr;
stConvertParam.nSrcDataLen = stOutFrame.stFrameInfo.nFrameLenEx;
stConvertParam.enSrcPixelType = stOutFrame.stFrameInfo.enPixelType;
stConvertParam.pDstBuffer = pConvertData;
stConvertParam.nDstBufferSize = nConvertDataSize;
nRet = MV_CC_ConvertPixelTypeEx(handle, &stConvertParam);
Check(nRet);


Decode Image

Some cameras support encoding and compressing images. You can decode such images by calling MV_CC_HB_Decode().
Attention
Check that the input image is complete before decoding. If any data packets of the image were lost, the API returns a failure. Check nLostPacket (the number of packets lost in the current frame) in MV_FRAME_OUT_INFO_EX: if nLostPacket is greater than 0, packet loss occurred.
The following sample code shows how to decode compressed image.
unsigned char* pDstBuf = NULL;
MV_FRAME_OUT stImageInfo = {0};
MV_CC_HB_DECODE_PARAM stDecodeParam = {0};
// Check packet loss
if (0 == stImageInfo.stFrameInfo.nLostPacket)
{
    // Lossless compression
    stDecodeParam.pSrcBuf = stImageInfo.pBufAddr;
    stDecodeParam.nSrcLen = stImageInfo.stFrameInfo.nFrameLen;
    if (NULL == pDstBuf)
    {
        pDstBuf = (unsigned char *)malloc(sizeof(unsigned char) * nPayloadSize);
        if (NULL == pDstBuf)
        {
            printf("malloc pDstBuf fail!\n");
            break;
        }
    }
    stDecodeParam.pDstBuf = pDstBuf;
    stDecodeParam.nDstBufSize = nPayloadSize;
    nRet = MV_CC_HB_Decode(handle, &stDecodeParam);
    Check(nRet);
}
else
{
    printf("Frame [%d] lost packet [%d]\n", stImageInfo.stFrameInfo.nFrameNum, stImageInfo.stFrameInfo.nLostPacket);
}


Recording

You can save images received from the camera to a recording file via the SDK. The steps are as follows:
  1. Call MV_CC_StartRecord() to start recording. You need to configure parameters required for recording.
  2. Call MV_CC_InputOneFrame() to input image data repeatedly.
  3. Call MV_CC_StopRecord() to stop recording when you finish.
The following sample code shows how to save images to a recording file.
  1. Start recording: call MV_CC_StartRecord() and configure the required parameters, which can be obtained from the camera or from the image. The following sample code gets the width, height, and pixel format from the camera.

    MV_CC_RECORD_PARAM stRecordPar = {0};
    MVCC_INTVALUE stParam = {0};
    nRet = MV_CC_GetIntValue(handle, "Width", &stParam);
    Check(nRet);
    stRecordPar.nWidth = stParam.nCurValue;
    nRet = MV_CC_GetIntValue(handle, "Height", &stParam);
    Check(nRet);
    stRecordPar.nHeight = stParam.nCurValue;
    MVCC_ENUMVALUE stEnumValue = {0};
    nRet = MV_CC_GetEnumValue(handle, "PixelFormat", &stEnumValue);
    Check(nRet);
    stRecordPar.enPixelType = MvGvspPixelType(stEnumValue.nCurValue);
    MVCC_FLOATVALUE stFloatValue = {0};
    nRet = MV_CC_GetFloatValue(handle, "ResultingFrameRate", &stFloatValue);
    Check(nRet);
    stRecordPar.fFrameRate = stFloatValue.fCurValue;
    stRecordPar.nBitRate = 1000;
    stRecordPar.strFilePath = "./Recording.avi";
    nRet = MV_CC_StartRecord(handle, &stRecordPar);
    Check(nRet);
  2. Input frames for recording: call MV_CC_InputOneFrame() repeatedly to feed image data from the camera.

    static unsigned int __stdcall WorkThread(void* handle)
    {
        int nRet = MV_OK;
        MV_FRAME_OUT stImageInfo = {0};
        MV_CC_INPUT_FRAME_INFO stInputFrameInfo = {0};
        while (1)
        {
            nRet = MV_CC_GetImageBuffer(handle, &stImageInfo, 1000);
            if (nRet == MV_OK)
            {
                printf("Get Image Buffer: Width[%d], Height[%d], FrameNum[%d]\n",
                       stImageInfo.stFrameInfo.nWidth, stImageInfo.stFrameInfo.nHeight, stImageInfo.stFrameInfo.nFrameNum);
                stInputFrameInfo.pData = stImageInfo.pBufAddr;
                stInputFrameInfo.nDataLen = stImageInfo.stFrameInfo.nFrameLen;
                nRet = MV_CC_InputOneFrame(handle, &stInputFrameInfo);
                Check(nRet);
                nRet = MV_CC_FreeImageBuffer(handle, &stImageInfo);
                Check(nRet);
            }
            if (g_bExit)
            {
                break;
            }
        }
        return 0;
    }
  3. Stop recording: Call MV_CC_StopRecord() to stop recording.

    nRet = MV_CC_StopRecord(handle);
    Check(nRet);


Divide Image via Multi-Light Control Feature

  • Divide Image via SDK
    Some cameras support the multi-light control feature and can send the entire multi-light image to the SDK. By calling MV_CC_ReconstructImage(), the entire image can be split so that the image of each exposure is output separately.
    The basic steps are as follows:
    1. Call MV_CC_GetEnumValue() to get the value of the node “MultiLightControl”, i.e. the number of exposures of the current camera.
      Attention
      • Some cameras may lack the node “MultiLightControl”; in that case, configure it according to the actual number of exposures.
      • In HB (high bandwidth) mode, the value of the node “MultiLightControl” must be converted to obtain the effective number of exposures.
    2. If image acquired is HB image, call MV_CC_HB_Decode() for HB decoding.
    3. Call MV_CC_ReconstructImage() to reconstruct image.
    The sample code is as follows:
    unsigned int m_nMultiLightNum = 0; // Number of exposures
    MVCC_ENUMVALUE stEnumValue = {0};
    int nRet = MV_CC_GetEnumValue(handle, "MultiLightControl", &stEnumValue);
    Check(nRet);
    m_nMultiLightNum = stEnumValue.nCurValue;
    nRet = MV_CC_GetEnumValue(handle, "ImageCompressionMode", &stEnumValue);
    Check(nRet);
    if (2 == stEnumValue.nCurValue) // HB mode is on
    {
        /* Conversion logic for the number of exposures:
           currently the number of lights defined in the firmware HB mode starts from 0x10,
           and the actual number of lights is taken from the lower four bits of the byte. */
        m_nMultiLightNum = m_nMultiLightNum & 0xF;
        /* In HB mode the HB images must be decoded (refer to the chapter "Decode Image")
           before reconstructing the image. */
    }
    MV_RECONSTRUCT_IMAGE_PARAM stImgReconstructionParam = {0};
    unsigned char* pImageBufferList[8] = {0};
    stImgReconstructionParam.nWidth = stOutFrame.stFrameInfo.nWidth;
    stImgReconstructionParam.nHeight = stOutFrame.stFrameInfo.nHeight;
    stImgReconstructionParam.enPixelType = stOutFrame.stFrameInfo.enPixelType;
    stImgReconstructionParam.pSrcData = stOutFrame.pBufAddr;
    stImgReconstructionParam.nSrcDataLen = stOutFrame.stFrameInfo.nFrameLen;
    /* Number of exposures */
    stImgReconstructionParam.nExposureNum = m_nMultiLightNum;
    stImgReconstructionParam.enReconstructMethod = MV_SPLIT_BY_LINE;
    /* Length of each image after reconstruction */
    unsigned int nImageBufferSize = stImgReconstructionParam.nSrcDataLen / m_nMultiLightNum;
    for (unsigned int i = 0; i < m_nMultiLightNum; i++)
    {
        if (pImageBufferList[i])
        {
            free(pImageBufferList[i]);
            pImageBufferList[i] = NULL;
        }
        pImageBufferList[i] = (unsigned char*)malloc(nImageBufferSize);
        if (NULL != pImageBufferList[i])
        {
            stImgReconstructionParam.stDstBufList[i].pBuf = pImageBufferList[i];
            stImgReconstructionParam.stDstBufList[i].nBufSize = nImageBufferSize;
        }
        else
        {
            return MV_E_RESOURCE;
        }
    }
    nRet = MV_CC_ReconstructImage(handle, &stImgReconstructionParam);
    Check(nRet);
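For intuition about MV_SPLIT_BY_LINE, one plausible reading (assumed here purely for illustration) is that source lines are distributed round-robin across the exposures. The following SDK-independent sketch shows such a de-interleave:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Round-robin line de-interleave: source line i goes to destination
// buffer (i % exposureNum). This mimics one plausible interpretation of
// a "split by line" reconstruction; illustration only, not SDK code.
static std::vector<std::vector<uint8_t>> SplitByLine(
    const std::vector<uint8_t>& src, size_t lineBytes, unsigned exposureNum)
{
    std::vector<std::vector<uint8_t>> dst(exposureNum);
    const size_t lines = src.size() / lineBytes;
    for (size_t i = 0; i < lines; ++i)
    {
        const uint8_t* line = src.data() + i * lineBytes;
        dst[i % exposureNum].insert(dst[i % exposureNum].end(),
                                    line, line + lineBytes);
    }
    return dst;
}
```

When the source length divides evenly by the number of exposures, each destination buffer receives src.size() / exposureNum bytes, which matches the nImageBufferSize computation in the sample above.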
  • Divide Image via Frame Grabber
    When dividing images via a frame grabber, you can use SubImageList in the structure MV_FRAME_OUT_INFO_EX to get the sub-images after division. The sample code is as follows:
    Note
    Only some frame grabber models support the image division feature. Refer to the actual product for supported models.
    MV_FRAME_OUT stFrameOut = { 0 };
    int nRet = MV_CC_GetImageBuffer(pUser, &stFrameOut, 1000);
    if (nRet == MV_OK)
    {
        if (stFrameOut.stFrameInfo.nSubImageNum > 0)
        {
            for (unsigned int i = 0; i < stFrameOut.stFrameInfo.nSubImageNum; ++i)
            {
                MV_CC_IMAGE* pSubImage = &stFrameOut.stFrameInfo.SubImageList.pstSubImage[i];
                printf("SubImage buffer[%p], size[%d]\n", pSubImage->pImageBuf, pSubImage->nImageLen);
            }
        }
        MV_CC_FreeImageBuffer(pUser, &stFrameOut);
    }


