Assessing video card performance for canvas rendering in FireMonkey


Recently, my team began noticing that our C++Builder project runs strangely on certain computers. We have narrowed this down to the fact that we run the application with GlobalUseGPUCanvas set to true. As far as I understand, this means that FireMonkey will use the GPU to render the canvas. The machines having trouble were those with less powerful integrated graphics cards. I would therefore like to write some logic that only enables GlobalUseGPUCanvas if the user has a powerful enough graphics card.
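For context, here is roughly where that logic would live in our project source, assuming a standard C++Builder FMX `_tWinMain`; `HasCapableGpu` is a hypothetical helper I would still need to write around the DXGI check below:

```cpp
#include <fmx.h>
#pragma hdrstop
#include <tchar.h>

// Hypothetical helper that would wrap the DXGI video-memory check.
extern bool HasCapableGpu();

int WINAPI _tWinMain(HINSTANCE, HINSTANCE, LPTSTR, int)
{
    try
    {
        // GlobalUseGPUCanvas must be set before Application->Initialize(),
        // since FireMonkey chooses its canvas class once at startup.
        GlobalUseGPUCanvas = HasCapableGpu();

        Application->Initialize();
        Application->Run();
    }
    catch (Exception &exception)
    {
        Application->ShowException(&exception);
    }
    return 0;
}
```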

To start this, I have written the following code, based heavily on this MSDN article. It loops through all of the video cards on the machine and reads how much memory each one has.

IDXGIFactory *pFactory = NULL;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&pFactory);

if (pFactory)
{
    IDXGIAdapter *pAdapter;
    for (UINT i = 0; pFactory->EnumAdapters(i, &pAdapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        // Fill in the adapter description before reading from it.
        DXGI_ADAPTER_DESC adapterDesc;
        pAdapter->GetDesc(&adapterDesc);

        SIZE_T vidMemory = adapterDesc.DedicatedVideoMemory;
        SIZE_T sMemory = adapterDesc.SharedSystemMemory;

        pAdapter->Release();
    }
    pFactory->Release();
}
This is a decent start, because I can check the computer's video cards to see whether they have enough memory to run the program on the GPU. However, I am not sure how to handle the scenario where a machine has a really nice dedicated graphics card but is running off of integrated graphics instead. In that case I would see the powerful dedicated card and conclude the machine has enough video memory, when the card actually in use is the weaker integrated one.
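One idea I am considering, though I am not sure it is reliable (especially with switchable graphics), is to treat an adapter as "active" if it owns an output that is attached to the desktop. A sketch of that check for a single adapter from the loop above:

```cpp
#include <dxgi.h>

// Sketch: returns true if this adapter is driving at least one
// desktop-attached display. Assumption: an adapter with no attached
// outputs is not the one rendering to the screen.
bool AdapterDrivesDesktop(IDXGIAdapter *pAdapter)
{
    IDXGIOutput *pOutput = NULL;
    for (UINT i = 0; pAdapter->EnumOutputs(i, &pOutput) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_OUTPUT_DESC outputDesc;
        pOutput->GetDesc(&outputDesc);
        pOutput->Release();

        if (outputDesc.AttachedToDesktop)
            return true;
    }
    return false;
}
```

I do not know whether this holds on laptops with hybrid graphics, where the integrated GPU may own the display even when the dedicated GPU does the rendering.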

Is there any way to tell which of the graphics cards I am enumerating is the currently active graphics device?
