I saw the new ATI 9700 graphics card and it seems to offer a lot of hardware-accelerated things :) Does anyone know if it can be used in video editing, maybe to speed things up? It says it has MPEG2 decompression and iDCT (as usual), but it also does color space conversion. And it has noise removal for captures, adaptive deinterlacing and frame rate conversion too... Sounds interesting enough, but can we use it?
These hwaccel functions are usually "write-only": the picture ends up in video memory, where the only remaining sensible thing to do is to display it.
I have to corroborate gabest on this: trying to use those hardware features to speed up encoding will get you about as far as trying to use the RealHollywood+'s MPEG2 decoding to transcode MPEG2 streams.
Actually, in the 9700 ATI did away with separate hardware decoders for video and is using the vertex and pixel shaders for hardware video acceleration. They are using completely closed algorithms (last I checked), but since they are built on vertex and pixel shaders, which M$ provides full documentation for, it shouldn't be too hard to make some filters that reproduce the same functionality. They obviously won't be as fast as ATI's versions, but they should be usable on any DirectX 8.x-capable video card.

I've been thinking about this for a while (about 10 months now), and if it can be done correctly it should be blazingly fast, since most DirectX 8 video cards have fast processors and screaming fast RAM. There are some problems, but they shouldn't be too bad. I just wish I had more time to research this. I swear, life takes up too much of my valuable encoding-related time :rolleyes:
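Just to give a feel for the idea (this is not ATI's code, only a sketch of mine): the simplest of those filters, a blend deinterlacer, can be written as a DirectX 8 pixel shader and created through the usual D3DX assemble path. The function name, the shader and the assumption that the two fields sit in texture stages 0 and 1 are all illustrative.

[code]
// Hypothetical sketch: blend deinterlacing as a ps.1.1 pixel shader.
// Assumes the two fields of the frame are already bound as textures on
// stages 0 and 1, and that a quad is then drawn over the target surface.
#include <d3d8.h>
#include <d3dx8.h>

static const char g_blendDeint[] =
    "ps.1.1\n"
    "tex t0\n"              // sample the current field
    "tex t1\n"              // sample the adjacent field
    "add_d2 r0, t0, t1\n";  // output = (t0 + t1) / 2

DWORD CreateBlendDeintShader(IDirect3DDevice8* dev)
{
    LPD3DXBUFFER code = NULL, errs = NULL;
    DWORD handle = 0;
    if (SUCCEEDED(D3DXAssembleShader(g_blendDeint, sizeof(g_blendDeint) - 1,
                                     0, NULL, &code, &errs)))
    {
        dev->CreatePixelShader((DWORD*)code->GetBufferPointer(), &handle);
        code->Release();
    }
    if (errs) errs->Release();
    return handle; // 0 on failure; select it later with dev->SetPixelShader(handle)
}
[/code]

The noise removal and frame rate conversion filters would be the same pattern, just with more texture stages and more math per pixel.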
Hardware decoding into vram and sending the picture back to RAM for the codec is already possible, but I'm still not sure how fast the blt from vram to RAM would be. Just try locking a DirectDraw surface in vram and doing some image filtering effect on its data; it will be horribly slow.
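To make that concrete, this is roughly the "lock a surface and touch its pixels" experiment gabest means (the surface and the 8-bit layout are assumptions, just to keep the example short). The Lock call itself is cheap; it's the per-pixel reads from video memory over the bus that make it crawl.

[code]
// Illustration only: CPU-filtering a video-memory DirectDraw surface.
// Assumes 'surf' is an existing surface in vram with an 8-bit layout.
#include <windows.h>
#include <ddraw.h>

void InvertPixels(IDirectDrawSurface7* surf)
{
    DDSURFACEDESC2 desc;
    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);

    if (FAILED(surf->Lock(NULL, &desc, DDLOCK_WAIT, NULL)))
        return;

    BYTE* row = (BYTE*)desc.lpSurface;
    for (DWORD y = 0; y < desc.dwHeight; y++)
    {
        for (DWORD x = 0; x < desc.dwWidth; x++)
            row[x] = 255 - row[x];   // every read here comes back across the bus, uncached
        row += desc.lPitch;
    }
    surf->Unlock(NULL);
}
[/code]

Do the same thing on a system-memory surface and it flies; the arithmetic isn't the problem, the readback is.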
Yeah, I know most of the hardware-interaction theory, but I haven't had time to learn C yet, much less the amount of DirectX that would be required to do any of this (I've pretty much only done VB stuff so far, fairly separated from the guts of any memory transfer). My assumption is that you just convince the GFX card it is working on a texture, or just plain doing math for you (the easy part), then get the CPU to read the 'texture' back from the GFX card using a blt or something similarly fast; but as I said, I haven't had to work with direct access to memory before. Anyway, since I'm not even sure we are on topic anymore, I'm gonna go sleep and maybe look into this more at a later time.