I wanted to do some light measurements (illuminance, to be exact), but didn’t want to spend hundreds of euros on a light meter. Then I realized I already have quite a good light meter in my pocket: my smartphone’s camera. It measures luminance rather than illuminance, but that can be worked around.
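One way to work around that: photograph a matte grey card of known reflectance, derive the card’s luminance from the exposure settings with the reflected-light meter equation L = K · N² / (t · S) (calibration constant K ≈ 12.5), and for a roughly Lambertian card convert to illuminance with E = π · L / ρ. A quick sketch of that calculation in Perl, assuming you read the f-number, exposure time, ISO and card reflectance off the camera yourself:

#!/usr/bin/perl
# Sketch: estimate illuminance from a manual-exposure photo of a matte grey card.
# Assumes the reflected-light meter equation L = K * N^2 / (t * S) with K = 12.5,
# and a Lambertian card, so that E = pi * L / rho.
use strict;
use warnings;
my ( $N, $t, $S, $rho ) = @ARGV;    # f-number, exposure time [s], ISO speed, card reflectance
die "usage: $0 f-number exposure-time ISO reflectance\n" unless defined $rho;
my $L = 12.5 * $N**2 / ( $t * $S );       # luminance of the card [cd/m^2]
my $E = 3.14159265358979 * $L / $rho;     # illuminance on the card [lux]
printf "luminance: %.1f cd/m^2, illuminance: %.0f lux\n", $L, $E;

Saved as e.g. lux.pl and called as perl lux.pl 4 0.01 100 0.18 (f/4, 1/100 s, ISO 100, an 18% grey card), this works out to roughly 3500 lux.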
Just before the holidays, we got ourselves a Dreambox DM8000, an HD-capable set-top box and personal video recorder. The hardware specs look good: a twin DVB-S2 tuner built in, plus 2 free slots for additional tuners, e.g. for DVB-T or -C reception. It has 4 CI (Common Interface) slots for Conditional Access Modules (CAMs) and 2 smartcard readers. Under the hood is a 400 MHz MIPS processor, running a special Linux-based system named “Enigma2” from a flash chip. An optional hard disk and (slimline) DVD drive are also available, but you have to install them yourself. Doing so was really easy, as all screws and connectors are provided.
The installed software is decent, tunes fast and hasn’t crashed (yet). It supports decoding multiple channels from a single multiplex (known as multirec in the MythTV world). The manual (local copy) could be more thorough, though. The recording scheduler simply sucks, especially if you’re used to MythTV’s scheduling.
x264 is an open source h.264 encoder. Since r1177 it includes a preset system. The presets give an easy way to balance quality against encoding speed, ranging from placebo (highest quality, slowest) to ultrafast (lowest quality, fastest).
I wanted to get an idea of what kind of quality/time trade-off can be achieved with these settings. Usually the first steps up in quality cost barely any extra encoding time, while the last bits of quality are significantly more expensive. To verify and quantify this, I encoded 2 video sequences at 2 resolutions using all 9 available presets. I used PSNR as the metric. I’ll be the first to admit that PSNR does not equal perceived quality, but it correlates reasonably well with it.
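As a rough illustration of what one such run looks like on the command line (file name and rate-control setting are just placeholders, not necessarily the exact settings used), only the --preset value changes between encodes:

x264 --preset veryslow --crf 22 --psnr -o sequence_veryslow.mkv sequence.y4m

The --psnr flag makes x264 report the PSNR of the encode at the end of the run.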
The internet is filled with guides and how-tos for getting video onto your iPhone. The specs state that the iPhone supports h.264, baseline profile, level 3.0. Translated, this means:
- No B-frames
- No CABAC
- No weighted predictions
- No 8×8 DCT
- Max resolution around 640×640 (technically 1620 macroblocks of 16×16 pixels each)
- Max 25fps at that resolution (technically 40500 macroblocks per second)
- Max 10Mbps
The iPhone imposes some extra limitations:
- Max 640×480, 30fps
- Max 2.5Mbps
Most guides on the internet additionally force the number of reference frames down to 1 (ffmpeg‘s -refs parameter), although I could not find any spec sheet imposing this limit. So I decided to test this.
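With a reasonably recent ffmpeg and libx264, such a test encode could look like this (input/output names and bitrate are placeholders), varying only the -refs value between runs:

ffmpeg -i input.avi -vcodec libx264 -profile:v baseline -level 3.0 -s 640x480 -r 25 -b:v 1500k -refs 1 output_refs1.mp4
ffmpeg -i input.avi -vcodec libx264 -profile:v baseline -level 3.0 -s 640x480 -r 25 -b:v 1500k -refs 4 output_refs4.mp4

Forcing -profile:v baseline already takes care of the B-frame, CABAC, weighted prediction and 8×8 DCT restrictions listed above.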
I needed to convert a raw YUV image to something viewable on my computer. There are a few tools to do so, including the wonderful ImageMagick toolset. However, running my image through it as-is did not work.
The source image is a frame from an HD-SDI stream, which has a color depth of 10 bits instead of the usual 8. The sample packing seems to be UYVY, but using 16 bits per component (with the lower 6 bits always 0).
Since I only wanted a quick look at the frame, I simply discarded the lower 2 bits of each sample (i.e. the whole low byte) and converted that instead. This Perl one-liner drops the least significant byte of every 16-bit sample:
while( read STDIN, my $block, 2 ) { my @value = split //, $block; print $value[0]; }
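Saved as e.g. strip_lsb.pl (the name is just an example) and assuming the 16-bit samples are stored most significant byte first, it works as a simple filter:

perl strip_lsb.pl < frame_10bit.uyvy > 8bit.uyvy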
The output can be fed into ImageMagick’s convert tool:
convert -size 1280x720 -interlace none -sampling-factor 4:2:2 8bit.uyvy out.bmp