c++ - How to set decode pixel format in libavcodec?


I decode video with libavcodec, using the following code:

//open input file
if (avformat_open_input(&ctx, filename, NULL, NULL) != 0)
    return false; // couldn't open file
if (avformat_find_stream_info(ctx, NULL) < 0)
    return false; // couldn't find stream information

//find the video stream
videostream = -1;
for (i = 0; i < ctx->nb_streams; i++)
{
    if (ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        videostream = i;
        break;
    }
}
if (videostream == -1)
    return false; // didn't find a video stream

video_codec_ctx = ctx->streams[videostream]->codec;

//find the decoder
video_codec = avcodec_find_decoder(video_codec_ctx->codec_id);
if (video_codec == NULL)
    return false; // codec not found
if (avcodec_open(video_codec_ctx, video_codec) < 0)
    return false; // couldn't open codec

video_frame = avcodec_alloc_frame();
scaled_frame = avcodec_alloc_frame(); // note: its data buffers must be allocated separately (e.g. with avpicture_alloc)

static struct SwsContext *img_convert_ctx;
if (img_convert_ctx == NULL)
{
    int w = video_codec_ctx->width;
    int h = video_codec_ctx->height;
    img_convert_ctx = sws_getContext(w, h, video_codec_ctx->pix_fmt,
                                     w, h, dst_pix_fmt, SWS_BICUBIC,
                                     NULL, NULL, NULL);
    if (img_convert_ctx == NULL)
    {
        fprintf(stderr, "Cannot initialize the conversion context!\n");
        return false;
    }
}

while (b_play)
{
    if (av_read_frame(ctx, &packet) < 0)
        break;

    if (packet.stream_index == videostream)
    {
        //decode video frame
        avcodec_decode_video2(video_codec_ctx, video_frame, &framefinished, &packet);

        //did we get a complete video frame?
        if (framefinished)
        {
            if (video_codec_ctx->pix_fmt != dst_pix_fmt)
            {
                sws_scale(img_convert_ctx, video_frame->data,
                          video_frame->linesize, 0,
                          video_codec_ctx->height,
                          scaled_frame->data, scaled_frame->linesize);
            }
        }
    }
    av_free_packet(&packet);
}

The code works correctly, but it has to convert each frame to the required format. Is it possible to set the pixel format for decoding so that the correct format is produced without sws_scale?

Many thanks for any answers.

FFmpeg's AVCodec instances (the static decoder "factory" objects) each define an array of the pixel formats they support, terminated by the value -1.
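In the old API the question uses, that array is the pix_fmts member of the AVCodec struct (the comment here is mine, not from the header):

const enum PixelFormat *pix_fmts; // AVCodec member: supported formats, terminated by PIX_FMT_NONE (-1)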

The AVCodecContext (decoder instance) objects have a callback function pointer called get_format; it is a function pointer field in that structure.
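In that era's avcodec.h, the field's declaration looks roughly like this:

enum PixelFormat (*get_format)(struct AVCodecContext *s, const enum PixelFormat *fmt);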

This callback function is called at some point during codec initialization with the AVCodec factory object's array of supported formats, and the callback is supposed to choose one of the formats from that array (kind of "pick a card, any card") and return that value. The default implementation of the get_format callback is a function called avcodec_default_get_format (it is installed by avcodec_get_context_defaults2). The default function implements the "pick a format" logic quite simply: it chooses the first element of the array that isn't a hardware-accel-only pixel format.
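In rough terms, the default logic amounts to something like the following (a simplified sketch, not the actual FFmpeg source; the hwaccel test uses the old pixel format descriptor API from libavutil/pixdesc.h):

#include <libavutil/pixdesc.h>

// simplified sketch of avcodec_default_get_format's "pick a format" logic:
// return the first entry of the terminated array that is not flagged as a
// hardware-accel-only pixel format
static enum PixelFormat default_pick_sketch(struct AVCodecContext *ctx,
                                            const enum PixelFormat *fmt)
{
    while (*fmt != PIX_FMT_NONE &&
           (av_pix_fmt_descriptors[*fmt].flags & PIX_FMT_HWACCEL))
        fmt++;        // skip hwaccel-only formats
    return fmt[0];    // first "ordinary" format (or PIX_FMT_NONE if none)
}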

If you want the codec to work with a different pixel format, you can install your own get_format callback into the context object. However, the callback must return one of the values in the array (like choosing from a menu); it cannot return an arbitrary value. The codec will only support the formats it specifies in its array.

Walk the array of available formats and pick the best one. If you're lucky, it's the exact one you want, and the sws_scale function won't have to do any pixel format conversion. (If, additionally, you don't request scaling or cropping of the picture, sws_scale should recognize that the conversion is a no-op.)
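Putting the two together, a minimal sketch (assuming the old-API names from the question, and that dst_pix_fmt is visible at file scope):

// custom get_format: prefer dst_pix_fmt if the codec offers it,
// otherwise fall back to the library's default choice
static enum PixelFormat my_get_format(struct AVCodecContext *ctx,
                                      const enum PixelFormat *fmt)
{
    const enum PixelFormat *p;
    for (p = fmt; *p != PIX_FMT_NONE; p++)   // array is terminated by -1
    {
        if (*p == dst_pix_fmt)
            return *p;                       // exact match: sws_scale becomes unnecessary
    }
    return avcodec_default_get_format(ctx, fmt); // no match: default pick
}

Install it before opening the codec:

video_codec_ctx->get_format = my_get_format;
if (avcodec_open(video_codec_ctx, video_codec) < 0)
    return false; // couldn't open codec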

