Thursday 1 September 2016

FFMPEG detecting black frames and generating keystamps


In my previous post I looked at using FFMPEG for adding black frames, colour bars and slates using the filter_complex switch and a number of the libavfilters included with FFMPEG. Today I'm going to follow on from that, using the video filter (-vf) switch to look at generating information from the incoming media stream.

Detecting Black Frames
This is quite a typical video processing application, particularly at the professional end where content may intentionally have a sequence of black frames inserted to identify commercial breaks. It's useful to be able to identify those and 'segment' the video sequence to be put into an editor or automated process. 

FFMPEG has two libavfilters for this: blackdetect and blackframe. We're going to use the former, which has a syntax like the following:

ffmpeg -i myfile.mxf -vf "blackdetect=d=2:pix_th=0.00" -an -f null -


The blackdetect filter takes a parameter for the minimum duration of black to report (d=2) and a threshold for pixel 'blackness' (pix_th=0.00). The other options above are just there to stub off the output.

This writes the detection information to the console (ffmpeg's log output), which is pretty easily processed. There are quite a lot of online discussions about how to process that in various ways into text or csv files for ongoing work, and some options using ffprobe that are worth exploring.
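
As a minimal sketch of that (assuming a Unix-like shell, and noting that ffmpeg writes its log to stderr), the detection lines can be pulled out for further processing like this:

ffmpeg -i myfile.mxf -vf "blackdetect=d=2:pix_th=0.00" -an -f null - 2>&1 | grep blackdetect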

There are similar audio filters for detecting silence.
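
For example, silencedetect works in much the same way; a sketch (the -50dB noise floor and 2 second duration here are just assumed starting values):

ffmpeg -i myfile.mxf -af "silencedetect=noise=-50dB:d=2" -vn -f null - 2>&1 | grep silence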

Generating Scene-Change Frames
Another typical video processing task is generating a sequence of representative keystamps (image frames) for the video sequence, ideally with each of those representing a 'scene' in the sequence. There's a whole lot of discussion we could go into here on what constitutes a scene and the processing techniques for identifying one, but that's not the topic here. This is about demonstrating what FFMPEG can offer, take it or leave it!

One of the filtering functions offered by FFMPEG is the ability to make a conditional decision based on the processed data. In this case we are going to use the gt (greater than) comparison and compare the 'scene' score against a threshold value between 0 and 1.0. We'll then scale the output of that to our desired keystamp resolution and indicate the number of frames we want to generate. The syntax looks like this:


ffmpeg -i myfile.mxf -vf "select=gt(scene\,0.4),scale=640:320" -frames:v 5 -vsync vfr thumbs%0d.png

It appears that there is a 'bug' (or at least argued-over functionality) here: if you do not include the -vsync vfr option you'll only get the first detected frame repeated.

Generating a Single Scene Change Tile
Another pretty typical operation is wanting to summarise a whole file into a single keystamp made up of a number of tiled images giving an overview of the whole content. FFMPEG nicely provides us with a function for doing that as well. Warning: processing this seems to be quite slow.

ffmpeg -i myfile.mxf -vf "select=gt(scene\,0.4),tile,scale=640:320" thumbs%0d.png


In this case we're passing the images identified by the scene comparison into the 'tile' filter in the video processing chain.
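
As a variation (a sketch: the 3x2 layout, thumbnail size and output name are just assumptions), the tile filter can also be given an explicit grid and the output limited to a single tiled frame:

ffmpeg -i myfile.mxf -vf "select=gt(scene\,0.4),scale=320:180,tile=3x2" -frames:v 1 -vsync vfr summary.png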


Some of this, along with a well-written general introduction to using FFMPEG that is an easier starting point than the canonical documentation, can be found in this article on the swiss knife of video processing.







Tuesday 30 August 2016

RegEx MiniGrammar

I was working on some code the other day and quickly needed to drop in a RegEx to match a particular pattern, and once again I needed to Google the syntax and check some example web-wisdom to get what I needed quickly.

To save myself next time, I'm blogging some self notes cribbed from the easiest places so I can refer them here rather than search around. This might start to become a behaviour pattern in itself!

In many cases my usual regexes are just simple globbing helpers [glob patterns are sets of filenames expressed with wildcard characters, such as *.txt; the glob *.txt is roughly the regex ^.*\.txt$] to filter out particular files to process. More recently I've been increasingly using them to pattern match portions of text in URLs and other data sets.

Simple MiniGrammar reminder:

Alternatives (or)
Using the pipe character
e.g. gray|grey matches "grey" and "gray".

Grouping
Parts of the pattern match can be isolated with parentheses
e.g. gr(e|a)y matches "gray" and "grey".

Occurrences
A number of characters can be used to express the number of occurrences of the pattern in the match:

Optional (zero or one) is indicated with ?
e.g. colou?r matches "color" or "colour"

None to many (zero or n) is indicated with *
e.g. tx*t matches "tt", "txt" and "txxxxxxt".

One or more (one to n) is indicated with +
e.g. go+al matches "goal", "goooal" and "goooooooal" (but not "gal").

Exactly a certain number of times is given by {n}
e.g. gr{3} matches "grrr"

Bounded number of times is given by {min,max}
e.g. gr{1,3} matches "gr", "grr" and "grrr"

Metacharacters

Match a single character from a set using square brackets:
e.g. gr[ae]y matches "grey" or "gray"

Character ranges are expressed using the dash/minus sign:
e.g. [a-z] matches a single character from a to z or [a-zA-Z] matches lower case and capitals.

Negated sets are expressed by putting a ^ just inside the opening bracket, which matches any character except those listed. A common case is matching everything apart from a space:
e.g. [^ ] matches any character except a space.

Start of string is matched by ^
End of string is matched by $



Examples
And some of the examples I find helpful with a bit of commentary:

match whitespace at the start or end of a string: ^[ \t]+|[ \t]+$
Explanation: ^ anchors the match to the start of the string, and the square brackets match either a space or a tab (\t) whitespace character; this can be repeated one or more times (the plus character). OR it matches the whitespace found at the end of the string, as indicated by $.
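
And a quick sketch of putting a couple of these to use on the command line (grep -E for extended regex syntax; words.txt is just an assumed input file, and the \t escape inside the brackets assumes GNU sed):

grep -E 'gr(e|a)y' words.txt
echo "   some text   " | sed -E 's/^[ \t]+|[ \t]+$//g'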




Thursday 25 August 2016

Song Exploder


I've been listening to this podcast for over a year now after seeing it listed as one of the best holiday podcasts in the Guardian before our long driving holiday in 2015. It was also showcased in more detail in the Observer.



I took a bunch of unlistened episodes on holiday again this year and listened through some of my old favourites too. What is so interesting is how widely varied the creative processes are between musicians and bands, and how ideas come from the smallest of inspirations and are then honed and crafted. It's particularly interesting to hear how the musical and lyrical elements are developed together.

Here are some of my favourite episodes:

78 Grimes - Kill v. Maim
Interesting insight into the creative process, defining songs as either a kick-drum song or a guitar song. This track gets as much from the drums as possible with 40 layers of drum tracks!

76 Chvrches - Clearest Blue
I just love this band and have been listening to them for a while now. Their Glasto 2016 set was one of my highlights this year. It is interesting to hear their process and the idea of starting the creative process with a set of rules for the band, like a two chord rule, and then breaking them.

66 KT Tunstall - Suddenly I see
KT Tunstall is amazing, from her first blow-away performance on Jools Holland using a looper pedal to the way that she plays acoustic guitar. The insight into the development of this track is how her producer identified that her strong rhythm, developed from busking, was getting diluted by a drummer layering stock beats over it. This track was inspired by looking at artists you love.

43 Sylvan Esso - Coffee
I love the use of sound samples including the Little Tikes xylophone.

40 Ramin Djawadi - Game of Thrones theme
Everyone must have tried playing the catchy riff at some point! It's interesting hearing the development of the track and some of the derivatives including an 8bit version!

24 Tycho - Awake
Love the album, so it was interesting to hear the development process.

17 Anamanaguchi - Prom Night
I particularly liked hearing about the technical use of 8-bit Nintendo sounds and the creative process around the use of synthetic vocals.

9 Polica - Smug
And another exploration of how particular kit can drive the development of a song.

7 Jeff Beal - House of Cards
How the darkness and gloom was worked into this track to evoke a particular atmosphere.

6 Daedelus - Experience
I can remember listening to this podcast at the end of a long drive last year. It only uses acoustic sounds, with an interesting accordion riff that eventually became part of a hip-hop classic!





Wednesday 24 August 2016

FFMPEG for adding black, colourbars, tone and slates

There are quite a few scattered comments online about doing parts of this with FFMPEG, but nothing cohesive, and it seemed a bit hit and miss getting some of the bits to work at first. I've collected together the steps I worked through along with some of the useful references that helped along the way. It's not complete by any means, but it should be a good starter for building upon.

The first reference I turned up was to add black frames at the start and end of the video, giving a command line as follows:

ffmpeg -i XX.mp4 -vf "
    color=c=black:s=720x576:d=10 [pre] ;
    color=c=black:s=720x576:d=30 [post] ;
    [pre] [in] [post] concat=n=3" -an -vcodec mpeg2video -pix_fmt yuv422p -s 720x576 -aspect 16:9 -r 25 -minrate 30000k -maxrate 30000k -b 30000k output.mpg

This gives the starting point, using the -vf option to set up a concat video filter. The -vf option only allows a single input file into the filter graph but allows the definition of some processing options inside the function. The post goes on to look at some of the challenges the author was facing, mostly related to differences between the input file resolution and aspect ratio. I played around with this in a much more simplified manner using an MXF sample input file as follows:

ffmpeg -i XX.mxf -vf "
    color=c=black:s=1920x1080:d=10 [pre] ;
    color=c=black:s=1920x1080:d=30 [post] ;
    [pre] [in] [post] concat=n=3" -y output.mxf

In my case here I'm using the same output codec as the input, hence the simpler command line without additional parameters. The -y option means overwrite any existing files, which I've just used for ease. The key to understanding this is the concat filter, which took me quite a bit of work.

As a note, I’ve laid this out on multiple lines for readability, but it needs to be on a single command line to work.

Concat Video Filter
The concat video filter is the key to this operation, stitching together the three components. There are a couple of concepts to explore, so I'll take them bit by bit. Some of the referenced links use them earlier in their examples, so it might be worthwhile skipping the references first, reading through to the end and then dipping into the references as needed to explore more detail once you have the full context.

Methods of concatenating files explains a range of ffmpeg options, including concat, which is described with this example:

ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv \
  -filter_complex '[0:0] [0:1] [1:0] [1:1] [2:0] [2:1] concat=n=3:v=1:a=1 [v] [a]' \
  -map '[v]' -map '[a]' output.mkv

Notice in this example the \ is used to spread the command line over multiple lines as might be used in a linux script (this doesn't work on Windows). In this case there are multiple input files and the -filter_complex option is used instead. As most of the examples use -filter_complex rather than -vf, I'll use that from now on. I had a number of problems getting this to work initially, which I'll describe as I go through.

In this case the concat filter has a few more options:

concat=n=3:v=1:a=1

concat means use the media concatenate (joining) function.
n is the total number of input segments to join.
v is the number of video streams in each segment (0 = no video, 1 = one video stream).
a is the number of audio streams in each segment (0 = no audio, 1 = one audio stream).

Some clues to understanding how this works are given with this nice little diagram indicating how the inputs, streams and outputs are mapped together:

                   Video     Audio
                   Stream    Stream
                   
input_file_1 ----> [0:1]     [0:0]
                     |         |
input_file_2 ----> [1:1]     [1:0]
                     |         |
                   "concat" filter
                     |         |
                    [v]       [a]
                     |         |
                   "map"     "map"
                     |         |
Output_file <-------------------

Along with the following description:

ffmpeg -i input_1 -i input_2
 -filter_complex "[0:1] [0:0] [1:1] [1:0] concat=n=2:v=1:a=1 [v] [a]"
-map [v] -map [a] output_file
The above command uses:
  • Two input files, specified with "-i input_1" and "-i input_2".
  • The "concat" filter, used in the "-filter_complex" option to concatenate 2 segments of input streams.
  • "[0:1] [0:0] [1:1] [1:0]" provides a list of input streams to the "concat" filter. "[0:1]" refers to the first (index 0:) input file and the second (index :1) stream, and so on.
  • "concat=n=2:v=1:a=1" specifies the "concat" filter and its arguments: "n=2" specifies 2 segments of input streams; "v=1" specifies 1 video stream in each segment; "a=1" specifies 1 audio stream in each segment.
  • "[v] [a]" defines link labels for the 2 streams coming out of the "concat" filter.
  • "-map [v]" forces the stream labelled [v] to go to the output file.
  • "-map [a]" forces the stream labelled [a] to go to the output file.
  • "output_file" specifies the output file.

Filter_Complex input mapping
Before we get onto the output mapping, let's look at what this input syntax means. I cannot remember quite where I found the information, but basically the definition of the concat options n, v and a gives the 'dimensions' of the input and output to the filter: there will be v+a outputs and n*(v+a) inputs.

The inputs are referenced as follows: [0:1] means input 0, track 1; an alternative form is [0:v:0], which means input 0, video track 0.

There needs to be n*(v+a) of these, arranged (v+a) in front of the concat command. For example:

Concat two input video sequences:
"[0:0] [1:0] concat=n=2:v=1:a=0"

Concat two input audio sequences (assuming audio is on the second track):
"[0:1] [1:1] concat=n=2:v=0:a=1"

Concat two input AV sequences (assuming audio is on the second and third tracks):
"[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] concat=n=2:v=1:a=2"

Getting this wrong kept producing this cryptic message:

"[AVFilterGraph @ 036e2fc0] No such filter: '
'
Error initializing complex filters.
Invalid argument"

Which seems to be the root of some other people's problems as well.

Output mapping
Taking the rest of the filter, it is possible to put mappings after the concat command to identify the outputs:

concat=n=2:v=1:a=1 [v] [a]"
-map [v] -map [a]

These can then be mapped using the map command to the various output tracks which are created in order, so if you had four audio tracks it would look something like this:

concat=n=2:v=1:a=4 [v] [a1] [a2] [a3] [a4]"
-map [v] -map [a1] -map [a2] -map [a3] -map [a4]

This can be omitted and seems to work fine with a default mapping.

Notice in this case that the output tracks have all been named to something convenient to understand, likewise these could be written as follows:

concat=n=2:v=1:a=4 [vid] [engl] [engr] [frl] [frr]"
-map [vid] -map [engl] -map [engr] -map [frl] -map [frr]

This is also possible on the input as follows:

"[0:0] [vid1]; [1:0] [vid2]; [vid1] [vid2] concat=n=2:v=1:a=0"

Which is pretty neat and allows a rather clearer description.

Generating Black
Now we've got the groundwork in place we can create some black video. I found two ways of doing this, the first inside the filtergraph itself:

ffmpeg -i XX.mxf -filter_complex "[0:0] [video]; color=c=black:s=1920x1080:d=30 [black]; [black] [video] concat=n=2:v=1:a=0"

This has a single input file, from which we take the video track; we then create a 30 second black video source and feed both into the concat filter to produce a single video stream.

Alternatively a number of samples show the input stream being created like this:

ffmpeg -i XX.mxf -f lavfi -i "color=c=black:s=1920x1080:d=10" -filter_complex "[0:0] [video]; [1:0] [black]; [black] [video] concat=n=2:v=1:a=0"

This all works fine, so let's now add some audio tracks in. We'll need to generate a matching audio track for the black video; when I was first playing with this I found that the output files were getting truncated because the duration of the output only matched the 'clock-ticks' of the duration of the input video. The way I'll do this is to generate a tone using a sine wave source and set the frequency to zero, which just saves me explaining this again later.

ffmpeg -i XX.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [black]; sine=frequency=0:sample_rate=48000:d=10 [silence]; [black] [silence] [0:0] [0:1] concat=n=2:v=1:a=1"

And similarly if we wanted to top and tail with black it works like this:

ffmpeg -i XX.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [black]; sine=frequency=0:sample_rate=48000:d=10 [silence]; [black] [silence] [0:0] [0:1] [black] [silence] concat=n=3:v=1:a=1"

Or rather, it doesn't! It seems that you can only use each labelled stream once in the mapping… so it's easy enough to modify to this:

ffmpeg -i hottubmxf.mxf -filter_complex "color=c=black:s=1920x1080:d=10 [preblack]; sine=frequency=0:sample_rate=48000:d=10 [presilence]; color=c=black:s=1920x1080:d=10 [postblack]; sine=frequency=0:sample_rate=48000:d=10 [postsilence]; [preblack] [presilence] [0:0] [0:1] [postblack] [postsilence] concat=n=3:v=1:a=1" -y output.mxf

Which is a bit more of a faff, but works.

ColourBars and Slates
Adding colour bars is simply a matter of using another generator, this time replacing the black at the front with the testsrc pattern (smptebars and smptehdbars are also available if you want proper SMPTE bars) and generating a tone rather than silence:

ffmpeg -i hottubmxf.mxf -filter_complex "
testsrc=d=10:s=1920x1080 [prebars];
sine=frequency=1000:sample_rate=48000:d=10 [pretone];
color=c=black:s=1920x1080:d=10 [postblack];
sine=frequency=0:sample_rate=48000:d=10 [postsilence];
[prebars] [pretone] [0:0] [0:1] [postblack] [postsilence]
concat=n=3:v=1:a=1" -y output.mxf

Let’s add the black in back at the start as well:

ffmpeg -i hottubmxf.mxf -filter_complex "
testsrc=d=10:s=1920x1080 [prebars];
sine=frequency=1000:sample_rate=48000:d=10 [pretone];
color=c=black:s=1920x1080:d=10 [preblack];
sine=frequency=0:sample_rate=48000:d=10 [presilence];
color=c=black:s=1920x1080:d=10 [postblack];
sine=frequency=0:sample_rate=48000:d=10 [postsilence];
[prebars] [pretone] [preblack] [presilence] [0:0] [0:1] [postblack] [postsilence]
concat=n=4:v=1:a=1" -y output.mxf

Now let’s add the title to the black as a slate, which can be done with the following:

drawtext=fontfile=OpenSans-Regular.ttf:text='Title of this Video':fontcolor=white:fontsize=24:x=(w-tw)/2:y=(h/PHI)+th

Which I found along with some additional explanation for adding text boxes.

This can be achieved in the ffmpeg filtergraph syntax by adding the filter into the stream. In each of the inputs these filter options can be added as comma separated items, so taking the [preblack] input, let's now call it [slate], it would look like this:

color=c=black:s=1920x1080:d=10,drawtext=fontfile='C\:\\Windows\\Fonts\\arial.ttf':text='Text to write':fontsize=30:fontcolor=white:x=(w-text_w)/2:y=(h-text_h-line_h)/2 [slate];

Note the syntax of how to refer to the Windows path for the font.

This puts the text in the middle of the screen. Multiple lines are then easy to add.
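
Putting the pieces together, a full bars + tone, slate, content and trailing black command might look something like this (a sketch based on the commands above; the font path and title text are assumptions, and as before it all needs to go on a single command line in practice):

ffmpeg -i hottubmxf.mxf -filter_complex "
testsrc=d=10:s=1920x1080 [prebars];
sine=frequency=1000:sample_rate=48000:d=10 [pretone];
color=c=black:s=1920x1080:d=10,drawtext=fontfile='C\:\\Windows\\Fonts\\arial.ttf':text='Title of this Video':fontsize=30:fontcolor=white:x=(w-text_w)/2:y=(h-text_h-line_h)/2 [slate];
sine=frequency=0:sample_rate=48000:d=10 [presilence];
color=c=black:s=1920x1080:d=10 [postblack];
sine=frequency=0:sample_rate=48000:d=10 [postsilence];
[prebars] [pretone] [slate] [presilence] [0:0] [0:1] [postblack] [postsilence]
concat=n=4:v=1:a=1" -y output.mxf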



Friday 19 August 2016

dotnet core - Native Cross Platform

Following my initial foray into the world of dotnet core I was very excited by what this offers. I can now take my C# code and really easily build and run it on any of my Windows, Linux and Mac platforms. True, I've been able to do that for a while with Mono, but the new tooling makes it a common operation on each platform, and something that I can see working well with Docker and Cloud deployments.

Pumped up by this, and reading through the final post that I mentioned in the last article, I wanted to have a look at a native build of the application. There are a number of reasons for this that we'll dig into. First of all, the build completed in the last post is a dotnet portable application. Take a look in bin/Debug/netcoreapp1.0 and you can see this creates a dll that the dotnet core host uses to run the application; it's a jumping off point. It also means that you need to have the dotnet core runtime and libraries already installed. This is possible with Linux deploys, and there are a bunch of examples out there, and with a bit more faff it's possible with Docker, but I was interested in making this really simple by being able to copy over a single file.
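
For reference, a portable build like that is launched through the dotnet host, something along these lines (the dll name here is just an assumed example; it follows your project name):

dotnet bin/Debug/netcoreapp1.0/myapp.dll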

What's more, I had a taste for wanting to be able to build on my Windows or Mac box and create the Linux cross-compiled version that I could then put on a Docker container and use in the Cloud.

In the post there is a very nice walk through of setting up for various native compile targets. This requires some modifications to the basic project.json file as follows:


{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  },
  "runtimes": {
    "win7-x64": {},
    "ubuntu.14.10-x64": {}
  }
}

A new runtimes section has been added with some targets. A list of the supported runtimes can be found in the RID catalog.
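
The per-runtime outputs are then produced by restoring and building against a specific RID, a sketch assuming the preview tooling's -r/--runtime switch:

dotnet restore
dotnet build -r win7-x64
dotnet build -r ubuntu.14.10-x64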

Restoring and building now gives us a new directory bin/Debug/netcoreapp1.0/win7-x64 with the following contents:


This time you can clearly see it now contains the dotnetcore.exe that kicks things off. However, this does not look like a native exe to me, and more disappointingly the ubuntu version seems to have been missed completely.

Forgetting the cross-compile for the moment, there should certainly be a native build, and I could then go through some hassle, set up a Linux machine and build on that. I took a look for some other examples. This post gives some very positive early looks at dotnet core and native compiling, this time using the dotnet compile --native option. It seems that this is from a very early pre-release version and the compile option has since been scrubbed in favour of dotnet build --native.

Digging a bit more, it seems that both cross-compiling and indeed native compiling have been disabled in RC2. It seems I'm not the only one to have struggled with this and also taken a look at using the --native switch on the build option.

Not wanting to be deterred completely, I decided to uninstall the RC2/Preview2 tooling I had and install an earlier version. Getting hold of Preview1 was next to impossible. I found out how to get dotnet core versions with PowerShell, but after faffing around a bit I was blocked because scripting was unhelpfully disabled on the machine I was using. Undeterred, I did finally find a Preview1 msi, installed it and tested, and once again the native build option was not available.

As a last stab, I downloaded the dotnet cli source code and started kicking off a compile, which didn't work directly in Visual Studio, and I'd pretty much lost the energy at that point to have another round with PowerShell on another machine.

This is most frustrating as, although I could probably fight this through to get it working on Docker, I'm also interested in whether some of the backend functions I have could easily be used as AWS Lambdas. This would allow me to reuse some existing C# code just when needed without the costs of porting. It might not be the fastest at run-time, but it certainly would be beneficial in terms of reuse. This post got it to work, presumably with a build where the native option was still available.

Well, hopefully native and cross-compile will come back; I think for now I'll need to put this to one side until RC2 is out the door.

dotnet core - Getting Started

With the rise of interest in Docker containerisation and the benefits it provides, particularly in Cloud deployments, I've been keeping an eye on how Microsoft are evolving their strategy to join the party, from the early demonstration at the 2015 Keynote (video) to the more recent inclusion of Docker properly as a first class citizen on Windows shown at DockerCon.

The parallel path to this, in Nadella's open-source drive after the acquisition of Xamarin earlier this year, is the dotnet core initiative, which brings dotnet more easily onto Linux and macOS. This brings Mono into the fold and, at the same time, by being open-source keeps it with its most active community.

As I have a bunch of existing C# code from various projects, bringing some of it over to Linux and Docker offers some interesting opportunities, with the potential to reuse some of the components easily in non-Azure Cloud deployments.

Getting started was no problem at all. As most of the C# originated on Windows, I'm starting on that platform and will look at how the tools work on a Mac later. The target platform is Linux - Ubuntu or CentOS. The tools are easily downloaded for the platform of choice here.

Once installed it's a simple matter to make a basic Hello World example:


using System;

namespace HelloWorldSample 
{
 public static class Program 
 {
  public static void Main() 
  {
   Console.WriteLine("Hello World!");
  }
 }
}

This should be saved as program.cs in a project directory. The dotnet toolset then requires the basic project settings, such as the compile target and dependencies, to be configured in a simple json file such as the one below:


{
  "version": "1.0.0-*",
  "buildOptions": {
    "debugType": "portable",
    "emitEntryPoint": true
  },
  "dependencies": {},
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}

This can be saved in the same project directory as project.json for example.

It's now time to use the dotnet command line interface (CLI) to build the project. Run up a CMD prompt and within the directory you'll need to first pull down any dependency packages (like the referenced dlls in Visual Studio) that are used in the project. One of the key functions of the dotnet tooling is that it includes the NuGet package manager. Try the following:

dotnet restore

This will go away and download any packages you need, and will also create a file project.lock.json which pins the exact package versions that were resolved.

Now we can build and run the sample using dotnet run. This goes through a compile (build) stage and then runs the application straight after:

Neat! Using the run command again will recognise that the application has already been built and will just run it:

You'll also see that typical bin and obj directories have been created within the application.
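
As an aside, the compile and run steps can also be split out, and the built portable dll can be launched through the dotnet host directly (a sketch; the dll name is assumed to follow the project directory name):

dotnet build
dotnet bin/Debug/netcoreapp1.0/HelloWorldSample.dll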

This is great. The code is portable and the same kind of operation should run very nicely on my Mac as well.

There's a really good tutorial here that runs through much of the same example which we'll explore a bit further in the next post.




Monday 15 August 2016

AWS Lambda - more than ImageMagick and configuring API Gateway

We've looked at getting a basic Lambda function set up that returns a scaled version of a file held on S3. Not very exciting or dynamic. It's great that ImageMagick is available as a package for Lambda, and this means that some pretty basic Lambdas can be set up just using the inline code page on AWS to get things kicked off. For anything more, ImageMagick on its own doesn't quite give us enough, and this also pushes us on to the next step of using Lambdas for a bit more functionality.

We're going to take another baby step and look at providing a slightly improved function to take a cutout tile from the image used previously. Imagine we have an image that is composed of a grid of 5 rows and 10 columns of images, with a little border region between each. What we want to do is return one of those tiles from the function call.

ImageMagick alone no longer has enough functionality to help, so we're going to use the GraphicsMagick (gm) node module to provide the additional functionality. The code example is shown below:


var im = require("imagemagick");
var fs = require("fs");
var gm = require('gm').subClass({
    imageMagick: true
});
var os = require('os');

var AWS = require('aws-sdk');
AWS.config.region = 'eu-west-1';

exports.handler = function(event, context) {
    
    var params = {Bucket: 'bucket', Key: 'image.png'};
    var s3 = new AWS.S3();
                
    console.log(params);


    s3.getObject(params, function(err, data) {
        if (err) {
            console.log("ERROR", err, err.stack);     // an error occurred
            context.fail(err);                        // bail out rather than carry on with no data
            return;
        }

            console.log("SUCCESS", data);             // successful response

            var resizedFile = os.tmpdir() + '/' + Math.round(Date.now() * Math.random() * 10000);

            console.log("filename", resizedFile); 
            
            
            gm(data.Body).size(function(err, size) {
                
                console.log("Size", size); 
                
                var num = 10;

                var x = size.width / num;
                var y = size.height / 5;
            
                var col = 0;
                var row = 0;
            
                
                gm(data.Body)
                    .crop(x-14, y-14,
                            (col * x)+7,(row * y)+7)
                    .write(resizedFile, function(err) {
                    
                        console.log("resized", resizedFile); 
                        
                        if (err) {
                            console.log("Error", err); 
                            context.fail(err);
                        }
                        else {
                            var resultImgBase64 = new Buffer(fs.readFileSync(resizedFile)).toString('base64');
                            context.succeed({ "img" : resultImgBase64 });
                        }
                        
                    });
                });
                
        });
    
};

In this example we're still not sending in any parameters and we're looking to get back the (0,0) referenced tile in the top-left corner. Using the below image, this would be the blue circle:


If you put this Lambda code directly into the inline editor and try to call it, you'll get an error as it cannot find gm (GraphicsMagick). To add it in, unfortunately we need to start packaging up our Lambda. So, first step: you need to download the gm package, which is easily done with npm as follows:


npm install gm

If you do this in the root of your project you'll get a folder called node_modules containing a bunch of files. Now create a zip file containing that folder along with your Node.js function above (both at the top level of the zip). The contents inside should look like this:





Now in your Lambda you will need to select your zip file, upload it (by clicking Save - yep, a bit unintuitive) and configure the Configuration tab of your Lambda. The Handler needs to be set to the Node.js filename plus the function that you export, so in this case, with a file called 'getImage.js' and an exported handler called 'handler', it would be getImage.handler.


Great. Now things should run smoothly and you'll get back a tile (hopefully the round circle) from the image. To make this a bit more useful it would be good to now pass in the row + column information to be able to choose the tile, so let's do that.

Adjusting the Lambda code we can get parameter information in via the 'event' information passed into the handler function. Let's add a line to log this to the console.


exports.handler = function(event, context) {
    
    console.log('Received event:', JSON.stringify(event, null, 2));

Now configure your test event from the Lambda console - Actions/Configure Test Event to something like this:
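
A minimal sketch of the test event JSON (the field names just need to match what the code reads off the event object, and the values are arbitrary):

{
  "row": 1,
  "col": 2
}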

Now run a test execution and you should get something like this in the log (click on the Monitoring tab in Lambda and scroll down):


Great, we're calling this from within AWS as a test and getting back the row & column information. It's now simple enough to use these values to set the row & column variables. For the sake of simplicity I'll leave out the error handling for when the event values are missing or invalid; that's standard code work.


     var col = event.col;
     var row = event.row;

Run again and take a look at the base64 string in the JSON return result 'img'.

So this is getting the parameter information from within AWS, but we want to specify it from our simple HTML page, so we need to configure API Gateway to pass the parameters in the HTTP GET method in the traditional 'getImage?row=1&col=1' formulation.

Navigate to the Lambda Triggers tab and click on the 'GET' method, which will lead you to the API Gateway configuration. Choose your API, click on Resources and choose your function to give you this view:


Now click on the Integration Request, scroll down and open out the Body Mapping Templates:


Add in a new Content-Type (application/json) and paste in a template which maps the input query parameters onto the event object passed to the Lambda.
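
A minimal sketch of such a mapping template for the application/json Content-Type, using API Gateway's $input.params() to pull the values from the query string (the parameter names are assumed to match the Lambda code above):

{
  "row": "$input.params('row')",
  "col": "$input.params('col')"
}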

This should now be ready to go, so a simple test in a browser first and then tweak your HTML code to have something like this:


    var response = $http.get('https://xxxxxx.execute-api.eu-west-1.amazonaws.com/prod/getImage2?row=2&col=2');

It's a simple exercise now to play around with your JavaScript in the HTML to select the tile you want to get.