D3D Solution for YUV Playback under WPF

  • 2021-09-16 06:44:24
  • OfStack

When building video playback and monitoring systems, displaying YUV data is a common requirement. Typical playback controls and SDKs take a window handle and render directly onto the window with DirectDraw. If the user interface is built with WPF, however, such controls can usually only be embedded through WindowsFormsHost, which hosts a WinForms control inside the WPF window. Doing so runs into the "airspace" problem: the WinForms control always floats on top of the WPF visual tree, covers any WPF element drawn over it, and gives a poor experience when zooming and dragging. The root cause is that WPF and WinForms use different rendering technologies.

To properly support the display of YUV data in WPF, the usual solution is to convert the YUV data into RGB data that WPF supports, then display it with something like WriteableBitmap. The main problem with this approach is that the RGB conversion consumes a lot of CPU and is relatively inefficient. One optimization is to use libswscale from FFmpeg or Intel IPP; both libraries are heavily optimized for this kind of conversion. The following is an example using WriteableBitmap.

// Create a Bgr32 bitmap matching the video dimensions
WriteableBitmap imageSource = new WriteableBitmap(videoWidth, videoHeight, 
 DPI_X, DPI_Y, System.Windows.Media.PixelFormats.Bgr32, null); 
int rgbSize = videoWidth * videoHeight * 4; // 4 bytes per pixel for Bgr32 
IntPtr rgbPtr = Marshal.AllocHGlobal(rgbSize); 
YV12ToRgb(yv12Ptr, rgbPtr, videoWidth, videoHeight); // CPU color conversion 
// Update the image: copy into the back buffer and invalidate it
imageSource.Lock(); 
Interop.Memcpy(imageSource.BackBuffer, rgbPtr, rgbSize); 
imageSource.AddDirtyRect(new Int32Rect(0, 0, videoWidth, videoHeight)); 
imageSource.Unlock(); 
Marshal.FreeHGlobal(rgbPtr); 
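The `YV12ToRgb` call above is a user-supplied helper, not a library function. For illustration only, a naive managed sketch of that conversion (operating on `byte[]` buffers rather than `IntPtr` for clarity, using integer BT.601 coefficients) might look like this; real code would delegate to libswscale or IPP instead of a per-pixel loop:

```csharp
// Illustrative only: naive YV12 -> BGR32 conversion with BT.601 integer math.
static void YV12ToRgb(byte[] yv12, byte[] bgr32, int width, int height)
{
    int ySize = width * height;
    int vOffset = ySize;                // YV12 stores the V plane before U
    int uOffset = ySize + ySize / 4;
    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            int y = yv12[row * width + col];
            // Chroma planes are subsampled 2x2
            int chromaIndex = (row / 2) * (width / 2) + (col / 2);
            int v = yv12[vOffset + chromaIndex] - 128;
            int u = yv12[uOffset + chromaIndex] - 128;
            int r = y + ((351 * v) >> 8);
            int g = y - ((179 * v + 86 * u) >> 8);
            int b = y + ((443 * u) >> 8);
            int o = (row * width + col) * 4;
            bgr32[o + 0] = ClampToByte(b);
            bgr32[o + 1] = ClampToByte(g);
            bgr32[o + 2] = ClampToByte(r);
            bgr32[o + 3] = 255; // padding byte, unused by Bgr32
        }
    }
}

static byte ClampToByte(int v) => (byte)(v < 0 ? 0 : v > 255 ? 255 : v);
```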

Another solution is to use D3DImage as a bridge between WPF and the graphics card. With D3DImage, we can hand the surface rendered by D3D directly to WPF for display. One reference is the use of VMR9 in WPF. VMR9 (Video Mixing Renderer 9) is the video renderer for DirectShow provided by Microsoft. After studying the VMR9-related code in WPF MediaKit, the core idea is this: when initializing the DirectShow graph to build the VMR9 renderer, output a D3D9 surface, and let D3DImage use this surface as its back buffer. Whenever a new video frame is rendered onto the surface, VMR9 raises an event; on receiving the notification, D3DImage refreshes its back buffer once. The following code shows the core idea.

private VideoMixingRenderer9 CreateRenderer() 
{ 
 var result = new VideoMixingRenderer9(); 
 var cfg = result as IVMRFilterConfig9; 
 var notify = result as IVMRSurfaceAllocatorNotify9; 
 // Switch the VMR9 to renderless mode so we can supply our own surfaces
 cfg.SetRenderingMode(VMR9Mode.Renderless); 
 var allocator = new Vmr9Allocator(); 
 notify.AdviseSurfaceAllocator(m_userId, allocator); 
 // Register for the event raised when a new video frame has been rendered
 allocator.NewAllocatorFrame += new Action(allocator_NewAllocatorFrame); 
 // Register for the event raised when a new D3D surface is created
 allocator.NewAllocatorSurface += new NewAllocatorSurfaceDelegate(allocator_NewAllocatorSurface); 
 return result; 
} 

// For ease of understanding, only the core parts are kept; the rest is omitted
private void allocator_NewAllocatorSurface(object sender, IntPtr pSurface) 
{ 
 // Set pSurface as the back buffer of the D3DImage
 this.m_d3dImage.Lock(); 
 this.m_d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, pSurface); 
 this.m_d3dImage.Unlock(); 
} 

private void allocator_NewAllocatorFrame() 
{ 
 // Invalidate the whole back buffer so WPF redraws it
 this.m_d3dImage.Lock(); 
 this.m_d3dImage.AddDirtyRect(new Int32Rect(0, /* Left */ 
   0, /* Top */ 
   this.m_d3dImage.PixelWidth, /* Width */ 
   this.m_d3dImage.PixelHeight /* Height */)); 
 this.m_d3dImage.Unlock(); 
} 

Therefore, as long as the video is played through DirectShow, it can be displayed seamlessly in WPF with the help of VMR9. But DirectShow cannot solve every problem. For example, when doing interactive video processing or video overlay, its fixed filter pipeline is hard to bend to the requirements, and sometimes we need to render directly ourselves.

From the VMR9 example we can see that the key is to obtain a D3D9 surface and render onto it. The remaining problem, then, is how to render the YUV data to that surface.
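Before any rendering, we need a D3D9 device and a target surface for D3DImage. The following is a minimal sketch assuming the SlimDX Direct3D 9 wrapper; the names `hwnd`, `videoWidth`, and `videoHeight`, and the device-creation parameters, are illustrative, not tuned for production:

```csharp
// Sketch (SlimDX assumed): create a D3D9Ex device and an X8R8G8B8
// render-target surface that D3DImage can accept as a back buffer.
var d3d = new Direct3DEx();
var pp = new PresentParameters
{
    Windowed = true,
    SwapEffect = SwapEffect.Discard,
    PresentationInterval = PresentInterval.Immediate,
    BackBufferFormat = Format.Unknown,
};
var device = new DeviceEx(d3d, 0, DeviceType.Hardware, hwnd,
    CreateFlags.HardwareVertexProcessing | CreateFlags.Multithreaded, pp);
// D3DImage requires an RGB surface; X8R8G8B8 is a safe choice
var target = Surface.CreateRenderTarget(device, videoWidth, videoHeight,
    Format.X8R8G8B8, MultisampleType.None, 0, false);
```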

YUV image formats cannot be handed to D3DImage directly, so we need a way to make D3D convert the YUV data for rendering. While rewriting the code in C#, I found that D3D already provides a simple way to perform the YUV-to-RGB color-space conversion for us, directly supported by the graphics hardware and therefore very efficient. The key is the StretchRectangle method of the D3D device.

public void StretchRectangle( 
 Surface sourceSurface, 
 Rectangle sourceRectangle, 
 Surface destSurface, 
 Rectangle destRectangle, 
 TextureFilter filter 
); 
The StretchRectangle method copies the contents of a region on one surface to a specified region on another surface. During the copy, as long as the source uses a format directly supported by the graphics card, such as YV12 or YUY2, the conversion between D3D pixel formats is performed automatically! So we just need to create a D3D offscreen plain surface with the matching pixel format, fill it with the raw data, and call StretchRectangle to copy it to the target surface, and we have the surface we want. The rest is left to D3DImage. The following is the core part of the example code.
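As a concrete sketch of that idea (again assuming SlimDX, plus a device and an X8R8G8B8 render-target surface named `device` and `target` as created earlier; `MakeFourCC` is a hypothetical helper), the YV12 offscreen surface is created once and reused per frame:

```csharp
// Sketch: create a YV12 offscreen plain surface, then per frame fill it
// with raw YV12 bytes and let StretchRectangle convert to RGB on the GPU.
static int MakeFourCC(char a, char b, char c, char d) =>
    a | (b << 8) | (c << 16) | (d << 24);
static readonly Format Yv12Format = (Format)MakeFourCC('Y', 'V', '1', '2');

var offscreen = Surface.CreateOffscreenPlain(device, videoWidth, videoHeight,
    Yv12Format, Pool.Default);

// Per frame:
var rect = new Rectangle(0, 0, videoWidth, videoHeight);
// ... lock `offscreen`, copy the YV12 bytes in, unlock (omitted) ...
device.StretchRectangle(offscreen, rect, target, rect, TextureFilter.Linear);
```

Whether the driver accepts a given FourCC format for StretchRectangle is hardware-dependent, so production code should check format support at startup and fall back to CPU conversion if needed.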

public void Render(IntPtr imgBuffer) 
{ 
 lock (this.renderLock) 
 { 
  // Fill the raw YUV data into the offscreen plain surface
  // Call StretchRectangle to copy it to the target surface
  //   (the color-space conversion happens here, on the GPU)
  // Perform the rendering operation
 } 
 // Notify the D3DImage to refresh its image
} 
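The final notification step can be sketched as follows, assuming a SlimDX `Surface` named `target` holding the converted frame and a `m_d3dImage` field as in the VMR9 example; `ComPointer` is SlimDX's property exposing the native interface pointer. This must run on the WPF UI thread:

```csharp
// Sketch of the WPF side: hand the rendered surface to the D3DImage
// and invalidate it so WPF composites the new frame.
void RefreshD3DImage(Surface target)
{
    m_d3dImage.Lock();
    m_d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9,
        target.ComPointer);
    m_d3dImage.AddDirtyRect(new Int32Rect(0, 0,
        m_d3dImage.PixelWidth, m_d3dImage.PixelHeight));
    m_d3dImage.Unlock();
}
```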
