Question: Video Infragram


by jfd | December 14, 2016 08:37 | #13783

C implementation of Infragram for realtime image processing

We would like to process images in real time, e.g. at a frame rate of at least 30 fps at HD resolutions. I'm looking for a C implementation of Infragram that could run, with some DSP modifications, on a 1 GHz+ microprocessor. Worst case, I plan to simply port the JS to C if that hasn't been done already.
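For scale, here is a rough back-of-envelope cycle budget (a sketch only, assuming 1280x720 as "HD" and a single 1 GHz core; the real Pi would also spend cycles on memory traffic and the OS):

```c
/* Hypothetical budget calculation: at an assumed 1280x720 resolution and
 * 30 fps, a 1 GHz core has only a few tens of clock cycles per pixel,
 * which is why a tight C inner loop (or NEON assembler) matters. */
double cycles_per_pixel(double width, double height,
                        double fps, double clock_hz) {
    return clock_hz / (width * height * fps);
}
/* 1280 * 720 * 30 ~= 27.6 Mpix/s, so roughly 36 cycles per pixel at 1 GHz. */
```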

Background story

I've developed a camera system based on the Raspberry Pi, which we are using for aerial and atmospheric videography and research with experimental rockets and drones. It basically runs raspivid plus audio capture for offline muxing into MP4. Nothing fancy, other than the flexibility to develop custom algorithms; now that the base HW setup and SW load have been tested, it's a good baseline.

A Pi Zero camera system costs about $5 (Pi) + $15 (cam) + $5 (LiPo) + $5 (Pi Zero LiPo) + $5 (16 GB SD card). At about $35 it is cheaper than the Mobius camera and allows much more flexible firmware to be developed.

I've recently modified this system with a 5 GHz RF transmitter for use on drones or for live telemetry, and plan to build a modified camera sensor to conduct agricultural research in California's Central Valley.

Has anyone already done a C implementation? I'm happy to share any of our findings and/or our port.

For more info see here:

Thank you and best regards -James


Hi, this is a good question, but I wanted to suggest possibly adapting the WebGL implementation, which is awfully fast:

In either case, I'm very interested in reworking Infragram's core libs and adding a test suite. Whether implemented in JS, GL, or C, we should be able to pass in an image and a transformation expression and get the same output. That way, people's Infragram expressions would be portable between implementations, and developing a new implementation would be easier, since it would just have to pass the same test suite.
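The cross-implementation check described above could be as simple as comparing two monochrome output buffers pixel by pixel. A minimal sketch (the function name and the idea of a small tolerance for float-vs-GPU rounding are assumptions, not an existing API):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical shared test-suite check: two implementations (JS, GL, or C)
 * process the same input image with the same expression; their monochrome
 * outputs must agree within a small per-pixel tolerance, which allows for
 * rounding differences between CPU floats and GPU shaders. */
int infragram_outputs_match(const uint8_t *a, const uint8_t *b,
                            size_t npixels, int tolerance) {
    for (size_t i = 0; i < npixels; i++) {
        if (abs((int)a[i] - (int)b[i]) > tolerance)
            return 0;  /* mismatch at pixel i */
    }
    return 1;  /* all pixels within tolerance */
}
```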

I didn't write the GL implementation, but have worked on the JS one a lot, and it needs a lot of structural/modularity work. Happy to offer more input as you decide which way to go, and perhaps we could select a set of before/after images to use as our common test suite. Thanks!

Hi Warren,

Thank you for your email; I see this is a greenfield area. I'd like to get this implementation to run as fast as possible. If you could point me to the relevant bits of the JS code for the basic frame-level processing, I can start working on C code for it. My plan is first to get it working within an RGB frame buffer, and then look at how we can use this as the output stage of the YUV decoder. The goal here would be to process 30 fps at some resolution (subject to CPU and memory subsystem loading); if needed, we can dip down into assembler after we have the basic C model. Presumably there is some write-up on the transforms, so maybe I should just start there and do an implementation based on that.
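The second stage mentioned above (feeding off the YUV decoder output) would need a per-pixel YUV-to-RGB conversion before the expression is applied. A sketch, using approximate full-range BT.601 coefficients in integer math; the Pi camera's exact matrix and range should be checked before relying on this:

```c
#include <stdint.h>

static uint8_t clamp_u8(int v) {
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one YUV pixel to RGB using an integer approximation of the
 * full-range BT.601 matrix, with coefficients scaled by 256:
 *   R = Y + 1.402 (V-128)
 *   G = Y - 0.344 (U-128) - 0.714 (V-128)
 *   B = Y + 1.772 (U-128)
 */
void yuv_to_rgb_pixel(uint8_t y, uint8_t u, uint8_t v,
                      uint8_t *r, uint8_t *g, uint8_t *b) {
    int c = y, d = (int)u - 128, e = (int)v - 128;
    *r = clamp_u8(c + (359 * e) / 256);
    *g = clamp_u8(c - (88 * d + 183 * e) / 256);
    *b = clamp_u8(c + (454 * d) / 256);
}
```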

Well, basically we just take the incoming image pixel by pixel; for each input pixel, we plug its color values into the "infragrammar" expression provided and assign the result to the outgoing pixel value (in monochrome). There's a bit more description here:

And the JS code is here:

The idea is to provide a very simple means of using textually input math functions to transform images. So r_exp() (etc.) is the provided function; we just wrap a simple expression like R*G-23+B in JavaScript.
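In C, the JS approach of compiling the text expression into r_exp() could be mirrored with a function pointer. A minimal sketch of that per-pixel loop over an interleaved 8-bit RGB buffer (the names infra_expr and infragram_apply are assumptions for illustration, not an existing API):

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the compiled "infragrammar" expression: takes one pixel's
 * R, G, B values and returns the monochrome result. */
typedef float (*infra_expr)(float r, float g, float b);

/* The example expression from the thread: R*G - 23 + B */
static float example_expr(float r, float g, float b) {
    return r * g - 23.0f + b;
}

static uint8_t clamp_u8(float v) {
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (uint8_t)v;
}

/* Apply the expression to every pixel of an interleaved RGB buffer,
 * writing a monochrome output buffer of npixels bytes. */
void infragram_apply(const uint8_t *rgb, uint8_t *mono,
                     size_t npixels, infra_expr expr) {
    for (size_t i = 0; i < npixels; i++) {
        float r = rgb[3 * i + 0];
        float g = rgb[3 * i + 1];
        float b = rgb[3 * i + 2];
        mono[i] = clamp_u8(expr(r, g, b));
    }
}
```

For speed, the function pointer could later be replaced by a small expression compiler or by specializing the loop per expression, so the inner loop vectorizes.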

Are you developing on GitHub, and might I follow your progress there?



Nice! Thank you, Warren; OK, I will check it out. Yes, I will develop on GitHub. I'll have some time in the next two weeks; once I get a prototype going, I'll send you a pointer.

best -james
