ios - Understanding Core Video's CVPixelBufferPool and CVOpenGLESTextureCache semantics -


I am restructuring my iOS OpenGL-based rendering pipeline. The pipeline consists of several rendering steps, so I need a lot of intermediate textures to render into and read from. Those textures are of different types (unsigned byte and half float) and may have a different number of channels.

To save memory and allocation effort, I recycle textures that are no longer needed by earlier steps in the pipeline. In my previous implementation, I did this on my own.

In my new implementation, I want to use the APIs provided by the Core Video framework instead, especially since they provide much faster CPU access to the texture memory. I understand that CVOpenGLESTextureCache lets me create OpenGL textures from pixel buffers, which can either be created directly or come from a CVPixelBufferPool. However, I cannot find any documentation that describes how these objects actually work and how they play together.
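For reference, this is roughly how I understand the pieces are supposed to fit together. This is only a sketch: the EAGLContext parameter, the 1024x768 size and the BGRA byte format are placeholder assumptions of mine, not anything taken from the documentation.

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Sketch: create a texture cache once, create one IOSurface-backed pixel
    // buffer directly, and wrap it in an OpenGL ES texture via the cache.
    static void DemoTextureFromPixelBuffer(EAGLContext *glContext)
    {
        CVOpenGLESTextureCacheRef textureCache = NULL;
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                     glContext, NULL, &textureCache);

        // IOSurface backing is what lets the CPU and GPU share the same memory.
        NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
        CVPixelBufferRef pixelBuffer = NULL;
        CVPixelBufferCreate(kCFAllocatorDefault, 1024, 768,
                            kCVPixelFormatType_32BGRA,
                            (__bridge CFDictionaryRef)attrs, &pixelBuffer);

        // The returned CVOpenGLESTextureRef wraps the buffer's memory as a GL
        // texture; there is no separate upload step.
        CVOpenGLESTextureRef texture = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                     textureCache, pixelBuffer,
                                                     NULL, GL_TEXTURE_2D,
                                                     GL_RGBA, 1024, 768,
                                                     GL_BGRA, GL_UNSIGNED_BYTE,
                                                     0, &texture);

        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        // ... render to / sample from the texture here ...

        CFRelease(texture);
        CVPixelBufferRelease(pixelBuffer);
        CVOpenGLESTextureCacheFlush(textureCache, 0);
        CFRelease(textureCache);
    }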

Here is what I want to know:

  • To get a texture from a CVOpenGLESTextureCache, I always have to provide a pixel buffer. Why is it called a "cache" if I have to provide the memory anyway and it cannot hand me back an old, unused texture?
  • The CVOpenGLESTextureCacheFlush function "flushes currently unused resources". How does the cache know that a resource is "unused"? When I release the corresponding CVOpenGLESTextureRef, does the texture go back to the cache? The same question applies to CVPixelBufferPool.
  • Am I able to maintain textures with different properties (type, number of channels, ...) in one texture cache? Does it know, based on my request, whether a texture can be reused or has to be created?
  • A CVPixelBufferPool seems to be able to manage only buffers of the same type. That means I need to create a dedicated pool for every texture configuration I use, right? (A sketch of what I have in mind follows below this question.)

I would be really glad if at least some of these questions could be clarified.
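Regarding the last point, this is the per-configuration setup I have in mind. Again just a sketch under my own assumptions: the helper name, the NSDictionary-based bookkeeping and the two pixel formats are mine.

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>

    // Hypothetical helper: one CVPixelBufferPool per (format, width, height)
    // configuration, since a single pool only vends buffers of one description.
    static CVPixelBufferPoolRef PoolForConfiguration(NSMutableDictionary *pools,
                                                     OSType format,
                                                     size_t width, size_t height)
    {
        NSString *key = [NSString stringWithFormat:@"%u-%zu-%zu",
                         (unsigned)format, width, height];
        NSValue *existing = pools[key];
        if (existing != nil) {
            return (CVPixelBufferPoolRef)existing.pointerValue;
        }

        NSDictionary *bufferAttrs = @{
            (id)kCVPixelBufferPixelFormatTypeKey     : @(format),
            (id)kCVPixelBufferWidthKey               : @(width),
            (id)kCVPixelBufferHeightKey              : @(height),
            (id)kCVPixelBufferIOSurfacePropertiesKey : @{}
        };
        CVPixelBufferPoolRef pool = NULL;
        CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                                (__bridge CFDictionaryRef)bufferAttrs, &pool);
        pools[key] = [NSValue valueWithPointer:pool];
        return pool;
    }

    // Usage: one pool for byte textures, another for half-float ones.
    // CVPixelBufferRef rgba = NULL, halfFloat = NULL;
    // CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
    //     PoolForConfiguration(pools, kCVPixelFormatType_32BGRA, 1024, 768), &rgba);
    // CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
    //     PoolForConfiguration(pools, kCVPixelFormatType_64RGBAHalf, 1024, 768), &halfFloat);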

Yeah, you won't really be able to find anything official. I looked and looked, and the short answer is that you have to test how the implementations actually behave. You can find a blog post on the subject.

Regarding the example code: the way it seems to work is that the texture "holds" the relation between the pixel buffer (in the pool) and the OpenGL texture that is bound when executing a render. The consequence is that the same buffer should not be returned by the pool until OpenGL is done with it. In the odd case of a race condition, the pool can grow by one buffer that is then held for a very long time. What is really nice about the texture cache API is that you only have to write the data into the buffer once, as opposed to calling an API like glTexImage2D() that "uploads" the data to the graphics card.
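To make that concrete, a per-frame flow under those assumptions might look roughly like the following. This is my own illustration, not code from the blog post; the pool, texture cache, size and format are placeholders.

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Per-frame sketch of the recycling behaviour described above.
    // Assumes an existing pool, texture cache and framebuffer setup.
    static void RenderOneFrame(CVPixelBufferPoolRef pool,
                               CVOpenGLESTextureCacheRef textureCache)
    {
        // The pool hands out a free buffer, or allocates a new one if all
        // existing buffers are still held somewhere (e.g. by a live texture).
        CVPixelBufferRef buffer = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer);

        // CPU access: lock the base address and write into the buffer's memory
        // directly; there is no glTexImage2D-style upload afterwards.
        CVPixelBufferLockBaseAddress(buffer, 0);
        // memcpy(CVPixelBufferGetBaseAddress(buffer), ..., ...);
        CVPixelBufferUnlockBaseAddress(buffer, 0);

        // The texture object records the relation between the pixel buffer and
        // the GL texture name for as long as it is alive.
        CVOpenGLESTextureRef texture = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                     textureCache, buffer, NULL,
                                                     GL_TEXTURE_2D, GL_RGBA,
                                                     1024, 768, GL_BGRA,
                                                     GL_UNSIGNED_BYTE, 0, &texture);
        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        // ... issue draw calls that sample from / render into the texture ...

        // Releasing the texture and the buffer is what makes them eligible for
        // reuse; the flush lets the cache drop resources it considers unused.
        CFRelease(texture);
        CVPixelBufferRelease(buffer);
        CVOpenGLESTextureCacheFlush(textureCache, 0);
    }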

