
Audio interference


It would appear that the interference isn't caused by a missing fdk-aac or any other similar library. It looks like I will have to try filtering or resampling the audio frame, or do something similar.

Not sure exactly how I should go about this yet, so watch this space...
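
If I do go down the resampling route, the rough shape would be something like the sketch below. This is only a guess at this stage and not code from my transcoder; it assumes libswresample is available and that the mismatch is between the decoder's sample format and what the encoder expects (the function and variable names are illustrative).

    #include <libavcodec/avcodec.h>
    #include <libswresample/swresample.h>

    /* Illustrative only: convert a decoded audio frame's samples into the
     * format/rate the encoder expects before re-encoding. The caller is
     * assumed to have allocated out_samples with av_samples_alloc(). */
    static int resample_audio(AVCodecContext *dec, AVCodecContext *enc,
                              AVFrame *frame, uint8_t **out_samples)
    {
        SwrContext *swr = swr_alloc_set_opts(NULL,
            enc->channel_layout, enc->sample_fmt, enc->sample_rate,
            dec->channel_layout, dec->sample_fmt, dec->sample_rate,
            0, NULL);
        if (!swr || swr_init(swr) < 0)
            return -1;

        /* Convert the raw samples; the converted buffer is what would get
         * fed to the encoder instead of the decoder's own buffers. */
        int converted = swr_convert(swr, out_samples, frame->nb_samples,
                                    (const uint8_t **)frame->extended_data,
                                    frame->nb_samples);
        swr_free(&swr);
        return converted;
    }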

 


Compiling Cross-Platform: Windows/OS X issues


Since my last couple of posts I have been working on a few things at the same time.

  1. GTK+ (not a priority)
    As a GUI is not high on the list of requirements I have decided to strip this out for simplicity. The UI I had built thus far was simplistic but also didn't look right on OS X.
  2. Cross Platform
    This is one of my requirements (again, not as important), but I think it is something I should be working on throughout development. So far I have been able to get this working on Windows, Linux and OS X (with one slight audio glitch).
  3. Audio Problem
    Transcoding a video on Linux works perfectly.
    For Windows or OS X the audio is distorted.
  4. Modifying motion vectors.
    I am currently working on the basis of embedding information in motion vectors.

So, whilst I research exactly how to go about embedding data in motion vectors, I am tackling other issues.

GTK+ (now removed) and cross platform compilation are now sorted.

My most interesting issue is the audio: there appears to be a heavy amount of distortion or interference. I have two theories:

  1. This could be similar to my video frames issue, where the direct result from the decoder contains some kind of "noise" which causes problems when directly re-encoding. I solved that issue for the picture by copying only the frame data to a new frame and encoding the new frame. I might have to do something similar with the audio frame, or run it through a filter (see the sketch after this list).
  2. Alternatively, a library might be missing that the Linux version has. I have noticed that the Linux setup has fdk-aac, another audio encoder (it has both libfaac-dev and fdk-aac).
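
If theory 1 turns out to be right, my current thinking is to mirror the video fix: allocate a fresh audio frame, copy only the raw samples into it, and hand that clean copy to the encoder. A rough sketch of what I mean (untested, and the names are illustrative):

    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>
    #include <libavutil/channel_layout.h>

    /* Illustrative sketch: copy only the sample data out of the decoder's
     * frame into a freshly allocated frame, leaving any decoder-side state
     * behind, then encode the clean copy instead. */
    static AVFrame *copy_audio_frame(const AVFrame *decoded)
    {
        AVFrame *clean = av_frame_alloc();
        if (!clean)
            return NULL;

        clean->nb_samples     = decoded->nb_samples;
        clean->format         = decoded->format;
        clean->channel_layout = decoded->channel_layout;
        clean->sample_rate    = decoded->sample_rate;

        if (av_frame_get_buffer(clean, 0) < 0) {
            av_frame_free(&clean);
            return NULL;
        }

        /* Copy just the raw samples into the new frame's buffers. */
        av_samples_copy(clean->data, decoded->data, 0, 0,
                        decoded->nb_samples,
                        av_get_channel_layout_nb_channels(decoded->channel_layout),
                        (enum AVSampleFormat)decoded->format);
        return clean;
    }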

I am hoping the solution to my audio issue is a simple one so I can focus on data embedding. I will commit some time to this problem, but if it seems like it is going to take too long I will cut my losses and stick with Linux for now.

 

Final remarks
I had some problems setting up FFmpeg to work on Windows; most notable was that codecs were not found. Here is how I solved the issue:

  1. Install libx264
    1. git clone git://git.videolan.org/x264.git
    2. cd x264
    3. ./configure --enable-static --enable-shared
    4. make
    5. make install
  2. Install faac
    1. ./configure --prefix=/c/mingw --without-mp4v2
    2. make clean && make && make install-strip
  3. Compile FFmpeg with the following (which, amongst other things, has libx264 and faac enabled).
    1. ./configure --enable-static --enable-libx264 --enable-pthreads --enable-gpl --disable-doc --enable-libfaac --enable-nonfree
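
To check that the rebuilt libraries actually expose the encoders those configure flags are meant to enable, a tiny test program along these lines can confirm it. This is just a sanity-check sketch using the standard libavcodec lookup calls, not something lifted from my build notes.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    /* Sanity check: confirm the freshly built libavcodec exposes the
     * encoders that --enable-libx264 and --enable-libfaac should provide. */
    int main(void)
    {
        avcodec_register_all();

        const char *names[] = { "libx264", "libfaac" };
        for (int i = 0; i < 2; i++) {
            AVCodec *enc = avcodec_find_encoder_by_name(names[i]);
            printf("%s: %s\n", names[i], enc ? "available" : "NOT FOUND");
        }
        return 0;
    }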

 


Quick Update


Now that I have a transcode system I have started looking into how to manipulate the motion vectors of certain frames - at the moment this hasn't yielded any progress. Whilst researching this I am looking into developing an AES cryptosystem for encrypting the embedded data.
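
Nothing is decided on the AES side yet; the sketch below is just the direction I'm considering, using OpenSSL's EVP interface (the library choice is an assumption at this point, not a commitment, and key/IV management is deliberately left out).

    #include <openssl/evp.h>

    /* Rough sketch only: encrypt a buffer of data to be embedded using
     * AES-128-CBC via OpenSSL's EVP API. Returns the ciphertext length
     * (padded to the block size) or -1 on error. */
    static int aes_encrypt(const unsigned char *in, int in_len,
                           const unsigned char *key, const unsigned char *iv,
                           unsigned char *out)
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len, out_len;

        if (!ctx)
            return -1;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv) != 1 ||
            EVP_EncryptUpdate(ctx, out, &len, in, in_len) != 1) {
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        out_len = len;
        if (EVP_EncryptFinal_ex(ctx, out + len, &len) != 1) {
            EVP_CIPHER_CTX_free(ctx);
            return -1;
        }
        out_len += len;
        EVP_CIPHER_CTX_free(ctx);
        return out_len;
    }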

I have also started trying to get my code to compile under Mac OS. So far, so good - I just need to set up GTK+ (I set up SDL, FFmpeg and their dependencies without any issues).

 


Transcoding in C with FFmpeg


Google doesn’t provide much information on transcoding video in C with FFmpeg – there are no examples anywhere of how to do this specifically. However, with a significant amount of determination and perseverance I have successfully produced a system capable of transcoding a video file. It might have taken a day and a half to perfect, but I now feel that I fully understand aspects of video encoding and FFmpeg that I did not understand before.

In my tests I have successfully transcoded an MP4 file using H.264/AAC to another MP4 file using the same codecs. The emphasis for this transcoding system was to produce something capable of parsing the individual audio and video frames, thus allowing manipulation of this data at a later stage. The work I have done thus far has hopefully set the foundations for data manipulation to follow shortly. Given this progress I remain cautiously optimistic that I will be able to adapt what I have done to embed data into video.

Over the last couple of days I have had to deal with the issues associated with processing multiplexed audio and video data. At first this didn’t seem too challenging: all I had to do was process each packet correctly as either audio or video, and then successfully interleave it in the output stream. A couple of nuances did crop up, but the most significant was the problem of the PTS (Presentation Time Stamp) of each video frame. When an incorrect PTS was defined, the video frame would not interleave, or playback would be hideously affected. In essence, the PTS is used to synchronise separate streams. In the instances where the video would play back, the incorrect PTS would affect the stream to the point where the image was static, or playing at a rate significantly more rapid than the audio.
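
To give a flavour of what the PTS handling involves, the sketch below shows the shape of the idea rather than a lift from my transcoder (names are illustrative): each encoded packet's timestamps get rescaled from the encoder's time base into the output stream's time base before being interleaved.

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Illustrative sketch: rescale a freshly encoded packet's timestamps into
     * the output stream's time base, then let the muxer interleave it. */
    static int write_encoded_packet(AVFormatContext *out_ctx, AVPacket *pkt,
                                    AVCodecContext *enc_ctx, AVStream *out_stream)
    {
        pkt->pts      = av_rescale_q(pkt->pts, enc_ctx->time_base, out_stream->time_base);
        pkt->dts      = av_rescale_q(pkt->dts, enc_ctx->time_base, out_stream->time_base);
        pkt->duration = av_rescale_q(pkt->duration, enc_ctx->time_base, out_stream->time_base);
        pkt->stream_index = out_stream->index;

        /* av_interleaved_write_frame takes care of ordering the audio and
         * video packets correctly in the output container. */
        return av_interleaved_write_frame(out_ctx, pkt);
    }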

After correcting the PTS, dealing with frame noise produced by the decoder and managing frame rate issues I ended up with my current solution which thankfully works!

 
