Thursday, 21 November 2013

Promises In CoffeeScript


Previously we discussed the benefits of the ‘Comb’ library. We will build on that knowledge here by discussing ‘Promises’. We are not speaking of promises that are too good to be true! Rather, a promise represents an operation that the system is willing to fulfill (resolve) or reject asynchronously. We want to use promises to wrap our code sections so that error handling does not block, and you will find that your code is also cleaner.

In these examples we will be using CoffeeScript, so we can get a flavor of that emerging language that compiles into JavaScript as well. You will also need to install the comb package through npm. One tricky thing to remember about CS (CoffeeScript) is that indentation is significant: even a stray tab or space can cause the compiler to interpret your code differently. Here ‘Promise’ has three supported methods that we will cover: errback(), callback(), and resolve().

# initialize the values
fs = require("fs")
comb = require("comb")

# Here we create a function to read a file
readFile = (file, encoding) ->
 # Let’s wrap it with a ‘Promise’
 ret = new comb.Promise()
 fs.readFile(file, encoding or "utf8", (err) ->
  if (err)
   # Here is the fail code path, something went wrong
   ret.errback(err)
  else
   # Success! Resolve the promise
   ret.callback(comb.argsToArray(arguments, 1)))

 # Return the promise object.
 ret
# Create an errorHandler function that writes to a log.
errorHandler = ->
 console.log("Error handler")

How do we use this? The beauty of promises is that you separate the success and fail code paths with a ‘then’ function. See this simple call:

readFile("myFile.txt").then ((text) ->
 console.log(text)
), errorHandler

Now let’s make this even easier: if we use the ‘resolve’ method, we do not have to handle the success and fail code paths separately. The ‘callback’ and ‘errback’ methods are wrapped in a single ‘resolve’ method that follows the node-style (err, result) convention.

fs = require("fs")
comb = require("comb")

readFile = (file, encoding) ->
 ret = new comb.Promise()
 fs.readFile(file, encoding or "utf8", ret.resolve.bind(ret))
 # Return the promise object.
 ret


This version of the ‘readFile’ function does the exact same work as the above, but with much cleaner code! Now let’s build on this by adding a listener. Its purpose is to listen for the resolution or rejection of a promise: we are waiting for the task to complete, or to fail. Using the ‘readFile’ function from above, the following listens for, you guessed it, a successful file read.

readFilePromise = readFile("myFile.txt")

# Remember, ‘then’ carries ‘callback’ and ‘errback’: success and failure.
readFilePromise.then ((file) ->
 console.log(file)
), (err) ->
 console.log(err)

# Here we are ignoring the errback, it is optional.
readFilePromise.then (file) ->
 console.log(file)

Now let’s perform an action on the file after we listen for it to be read. In this sample, let’s convert the contents to uppercase. We can pass the new ‘Promise’ straight into the ‘then’ method as the errback.

readAndConvertToUppercase = (file, encoding) ->
 ret = new comb.Promise()
 readFile(file, encoding).then ((data) ->
  ret.callback(data.toString().toUpperCase())
 # This is the errback, but here we can just pass ‘ret’
 ), ret
 ret

readAndConvertToUppercase("myFile.txt").then ((file) ->
 console.log(file)
), (err) ->
 console.log(err)

Enjoy the full sample:

# initialize the values
fs = require("fs")
comb = require("comb")

# read a file
readFile = (file, encoding) ->
 ret = new comb.Promise()
 fs.readFile(file, encoding or "utf8", ret.resolve.bind(ret))
 ret

readAndConvertToUppercase = (file, encoding) ->
 ret = new comb.Promise()
 readFile(file, encoding).then ((data) ->
  ret.callback(data.toString().toUpperCase())
 # This is the errback, but here we can just pass ‘ret’
 ), ret
 ret

readAndConvertToUppercase("myFile.txt").then ((file) ->
 console.log(file)
), (err) ->
 console.log(err)

Monday, 4 November 2013

Web Workers

HTML 5: Web Worker Basics

So when working with javascript you may not have realised that it is single-threaded... So Brian, what's the big deal?

To explain a little, for those new to threads: you can simply think of a thread as the normal flow of execution you have been working with. You start, call a function, do some work and so on. It's all very step by step.
Threads are the same basic idea, only several of them can run at once and talk to each other, allowing more work to be done in parallel.

The program starts execution as normal and then starts sub-tasks in threads that go off and do their work, then give back the results of their labor. This is how people leverage the strength of multi-core computers.
However, it will take a little shift in how you think about problems. ; )

In this post I'm going to look at 3 areas:
  1. How you set up and use a web worker
  2. Inlining your web worker into one file
  3. Web workers in older browsers that don't support them

1. How you set up and use a web worker

Let's get into the meat and bones of an example:

<!DOCTYPE html>
<html>
<head></head>
<body>
<div id="jsOutput"></div>
<script>
//load the WebWorker file
var worker = new Worker('webworker1.js');

//create the function to handle the response
worker.onmessage = function(event) {
  document.getElementById("jsOutput").innerHTML = "Received: " +;
};

// Start the worker.
worker.postMessage(''); // *Note: You must pass a string or a JSON object
</script>
</body>
</html>

And in webworker1.js we simply echo a message back to the page:

self.onmessage = function(event) {
   self.postMessage('Message from WebWorker');
};

The page then shows:
"Received: Message from WebWorker"

That's it. Very easy. Just put the code you want to run into another js file and off you go.

You should be aware that due to their multi-threaded behavior, web workers only have access to a subset of JavaScript's features:
  • The navigator object
  • The location object (read-only)
  • XMLHttpRequest
  • setTimeout()/clearTimeout() and setInterval()/clearInterval()
  • The Application Cache
  • Importing external scripts using the importScripts() method
  • Spawning other web workers

Workers do NOT have access to:

  • The DOM (it's not thread-safe)
  • The window object
  • The document object
  • The parent object

2. Inlining your web worker into one file

Let's take the above and combine it all into one file.


The first thing we need to do is move webworker1.js inside a script element on our page, but we must add an id to the element so we can reference it. I used the same name as I used when it was in a file: "webworker1".

Now we want to swap out the include with the contents of the source js file.
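The original code listing is missing here, but based on the description the inline version might look like the following. The `type="javascript/worker"` attribute is the conventional trick (any non-standard type stops the browser executing the script inline), and the id matches the old file name; treat this as a sketch of the pattern rather than the original listing.

```html
<script id="webworker1" type="javascript/worker">
  self.onmessage = function (event) {
    self.postMessage('Message from WebWorker');
  };
</script>

<script>
  //grab the worker source out of the script element
  var blob = new Blob([document.querySelector('#webworker1').textContent]);
  //build the worker from an object URL instead of a separate file
  var worker = new Worker(window.URL.createObjectURL(blob));

  worker.onmessage = function (event) {
    document.getElementById("jsOutput").innerHTML = "Received: " +;
  };
  worker.postMessage('');
</script>
```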


You should note the first two lines.

var blob = new Blob([document.querySelector('#webworker1').textContent]);
This creates a reference to our code block that we want our web worker to run.

var worker = new Worker(window.URL.createObjectURL(blob));
and here we create our web worker based on that code block.

3. Web workers in older browsers

For this you can use web-workers-fallback. This library provides basic compatibility for the HTML5 web worker API in browsers that don't support it.
To use it, you only need to include Worker.js, and everything should work out of the box.

*As usual you should read the Limitations section and test in a browser that doesn't support web workers ;)

For more information on web workers see the great Mozilla Developer Network resource: Using web workers

Friday, 25 October 2013

Welcome to Threejs

Continuing with my research into WebGL libraries, this week I was looking at ThreeJs, recreating the same sample as in my previous example with BabylonJs.

With Threejs there are 2 ways to create your viewport.
  1. Dynamically create a canvas element, as above. This is how you can add your rendered scene into a webpage as normal.
    *the size of the camera's viewport is given in pixels in the javascript source
  2. Fill the entire page
In this example I will be using the first approach, as the majority of examples you will find online reference the fullscreen mode. To place the viewport on the page, we first need a div where we want our viewport placed.

Download the compressed version of the library and put it in the same folder as your files.

To help keep our code clean you will now create a "myCode.js". I hope you noticed that this file is the one referenced in the above section.

Inside "myCode.js" we will add the following:
As we are rendering to an element on the page we need to define the width and height. These sizes are used in the example to match the camera perspective with the rendered viewport.

var canvasWidth = 578, canvasHeight = 200;
var camera = new THREE.PerspectiveCamera(90, canvasWidth  / canvasHeight, 0.1, 1000);
    camera.position.z = 3;

Now we create a renderer and attach it to the viewport.

var renderer = new THREE.WebGLRenderer({antialias:true});
renderer.setSize(canvasWidth , canvasHeight);

var viewport= document.getElementById( 'renderCanvas' );
viewport.appendChild( renderer.domElement );

Now we create our 2 world objects

//THREE.TorusGeometry(radius, tube, segmentsR, segmentsT)
var torus = new THREE.TorusGeometry( 1.5, 0.35, 16, 32 );
// THREE.SphereGeometry(radius, segmentsWidth, segmentsHeight)
var sphere = new THREE.SphereGeometry( 0.3, 16, 16 );

We need to define a material to apply to our objects. Phong shading will give a nice shine.

var material = new THREE.MeshPhongMaterial({
        // light
        specular: '#ffffff',
        // intermediate
        color: '#ffffff',
        // dark
        emissive: '#000000',
        shininess: 100
});

 var torusMesh = new THREE.Mesh(torus, material);
 var sphereMesh = new THREE.Mesh(sphere, material);

Create a light source in order to give the correct light and dark shading on the objects.

var directionalLight = new THREE.DirectionalLight( 0xffffff, 1);
directionalLight.position.set( 0, 10, 10 ); 

Let's create our scene and add the elements to it.

 var scene = new THREE.Scene();
  scene.add( directionalLight );
  scene.add( torusMesh );
  scene.add( sphereMesh );

To add a bit of flavour let's animate the torus. This will be in the function that will be run every frame.

  var render = function () {
      requestAnimationFrame(render);
      torusMesh.rotation.x += 0.01;
      torusMesh.rotation.y += 0.015;
      renderer.render(scene, camera);
  };

Let's start the animation by calling our render function for the first time.

render();
There you go done and dusted.
If you have any thoughts or feedback, let me know by leaving me a comment.

Sunday, 13 October 2013

Welcome to Babylonjs

Welcome to the new world of 3D in your browser!
Today I'm going to show you just how easy it can be with a little help from BabylonJs. BabylonJs is a high level wrapper on top of WebGL. With libraries like this, it is actually very easy to make things like the above scene.

The first thing we will need is some css for where we want the rendered image to be drawn.

       html, body {
            width: 100%;
            height: 100%;
            padding: 0;
            margin: 0;
            overflow: hidden;
       }
Now let's put the canvas element inside the body. Canvas is a new element introduced in HTML5, specifically for rendering graphics programmatically. So this is our viewport ^_^

Download the compressed version of the library and put it in the same folder as your files.

To help keep our code clean you will now create a "myCode.js". I hope you noticed that this file is the one referenced in the above section.

Inside "myCode.js" we will add the following:

function babylon(){
 //get a reference to the canvas element on the page
 var canvas = document.getElementById("canvasView");
 //create an instance of the rendering engine
 var engine = new BABYLON.Engine(canvas, true);
 //create an instance of a Scene.
 //This is used to house our camera, lights and shapes
 var scene = new BABYLON.Scene(engine);

Now that we have a Scene, we need to specify a camera

 //a Camera, so the renderer knows what to show us.
 var camera = new BABYLON.FreeCamera("Camera", new BABYLON.Vector3(0, 0, -10), scene);

Lets add some shapes and maybe a light.

 //Parameters are: name, number of segments (highly detailed or not), size, scene to attach the mesh. Beware to adapt the number of segments to the size of your mesh ;)
 var sphere = BABYLON.Mesh.CreateSphere("Sphere", 10.0, 1.0, scene);
 var torus = BABYLON.Mesh.CreateTorus("torus", 5, 1, 20, scene, false);
 //a light source to make things look pretty
 var light0 = new BABYLON.PointLight("Omni0", new BABYLON.Vector3(0, 100, 100), scene);

Now here's the trick to getting the shapes to move. We will set a function to be called just before each new frame is drawn to the screen. It's pretty self explanatory ;)

 var twist = 0;
 scene.beforeRender = function() {
  torus.rotation.z = twist;
  torus.rotation.x = twist;
  torus.rotation.y = twist;
  twist += 0.01;
 };

Now we will get the ball rolling by creating a render function to be looped over.
  // Render loop
  var renderLoop = function () {
   // Start new frame
   engine.beginFrame();
   // process scene
   //NOTE: at this point the "beforeRender" will be called
   scene.render();
   // draw
   engine.endFrame();
   // Need this to render the next frame
   BABYLON.Tools.QueueNewFrame(renderLoop);
  };
  //Need this to call the renderLoop for the 1st time
  BABYLON.Tools.QueueNewFrame(renderLoop);
One final thing we should do is to check if the browser supports WebGL.
}// END OF babylon function

//Check if the browser is supported
if (BABYLON.Engine.isSupported()) {
 babylon();
} else {
 alert("Sorry! WebGL is too cool for your Browser");
}
and you are done!
Have fun :D

Saturday, 28 September 2013

"Comb" Library: Logging (2 of 2)

Hi! Before continuing, note this is the second part of my "Comb" logging guide. For part one see: "Comb" Library: Logging. Otherwise, on we go.

In this part let's cover the configuration of the logging system by calling the logging configuration function comb.logger.configure, which states what levels should be stored or outputted where.

Before this we need to know about Appenders. Appenders are the end points that can be attached to loggers.

Multiple appenders are already included as part of the Comb Library:
  • FileAppender - log to a file
  • RollingFileAppender - log to a file up to a customizable size, then create a new one
  • JSONAppender - write the log out as JSON to a file
  • ConsoleAppender - log to the console

To declare an appender and its target, it would look something like this.
var myLogger = comb.logger("my.logger")
    .addAppender("FileAppender", {file:'/var/log/my.log'})
    .addAppender("RollingFileAppender", {file:'/var/log/myRolling.log'})
    .addAppender("JSONAppender", {file:'/var/log/myJson.log'});

The Level class is used to describe logging levels. The levels determine what types of events are logged to the appenders. For example, if Level.ALL is used then all events will be logged; however, if Level.INFO is used then ONLY INFO, WARN, ERROR, and FATAL events will be logged. To turn off logging for a logger use Level.OFF.
comb.logger.configure();
//the loggers you create now will have a ConsoleAppender
comb.logger.configure(comb.logger.appender("FileAppender", {file : '/var/log/my.log'}));
//loggers will now have a FileAppender

The cool part (nerd time ;): let's create a configuration file.
We configure by passing a JSON block to the "configure" function.

Configuration object layout:
  • "name space" (object attribute) -> Object
    • level (object attribute) -> String
    • appenders (Array) -> objects
      • name -> String
      • level -> String
      • type -> String
      • file -> String
      • pattern -> String
      • overwrite -> Boolean

Example from the comb site:
comb.logger.configure({
        "my.logger": {
            level: "INFO",
            appenders: [{
                //default file appender
                type: "FileAppender",
                file: "/var/log/myApp.log"
            }, {
                //default JSON appender
                type: "JSONAppender",
                file: "/var/log/myApp.json"
            }, {
                type: "FileAppender",
                //override default pattern
                pattern: "{[EEEE, MMMM dd, yyyy h:m a]timeStamp} {[5]level} {[- 5]levelName} {[-20]name} : {message}",
                //location of my log file
                file: "/var/log/myApp-errors.log",
                //override name so it will get added to the log
                name: "errorFileAppender",
                //overwrite each time
                overwrite: true,
                //explicitly set the appender to only accept errors
                level: "ERROR"
            }, {
                type: "JSONAppender",
                file: "/var/log/myApp-error.json",
                //explicitly set the appender to only accept errors
                level: "ERROR"
            }]
        }
});
You can also log directly to levels with:
    var logger = comb.logger("logger");
    logger.log("info", "my message");
    // or if it is one of the default types"my message");

Here's the list of pre-supported functions:
    logger.debug("debug message");
    logger.trace("trace message");"info message");
    logger.warn("warn message");
    logger.error("error message");
    logger.fatal("fatal message");

Friday, 27 September 2013

"Comb" Library: Logging (1 of 2)

Continuing with my overview of the comb library, let's take a look at the logging functions available.

Logging is critical in your applications, not just for errors but in order to get a good understanding of how your application is operating in the wild/production.

So what am I going to cover in this post.. well let's see:

  • Logger inheritance through name spaces
  • Predefined level definitions, along with the ability to define your own.
Logger inheritance through name spaces.. sample code anyone?

Let's load the comb library
var comb = require('comb'); //load the comb lib

Let's create a set of different loggers for logging different aspects
var logger_sys = comb.logger("sys");
var logger_user = comb.logger("user");
var logger_sys_logger = comb.logger("sys.logger");
var logger_user_logger = comb.logger("user.logger");

Note that the "." dot denotes the separation of "name space" levels

Next, here's a simple function just to print out the current level attribute for each of our loggers.
function print(){
 console.log("sys:         " + logger_sys.level);
 console.log("user:        " + logger_user.level);
 console.log("sys.logger:  " + logger_sys_logger.level);
 console.log("user.logger: " + logger_user_logger.level);
 console.log();//lets skip a line for readability
}

Now let's change the levels, printing the result after each step.
console.log(">> lets see what the default levels look like");
print();

console.log(">> lets set sys and its child to 'DEBUG'");
logger_sys.level = 'DEBUG';
print();

console.log(">> lets set user and its child to 'INFO'");
logger_user.level = 'INFO';
print();

console.log(">> Now we will ONLY set sys.logger to 'WARN'");
logger_sys_logger.level = 'WARN';
print();

An example of inheritance within logging:
console.log('>> Now we will create a sub logger');
console.log('>> It will inherit the level from its parent');
comb.logger("sys.logger.sub"); // this new logger picks up 'WARN' from sys.logger

So what is the point of this??

Ok, let's say you have "INFO" and "ERROR" levels (for a full list of predefined logging levels see comb.logging.Level). We can call one logging instance something inspired like "mypack.myclass.note" and set its level to INFO, and another, "mypack.myclass.problem", to "ERROR".

Something important to note is that if you use the same "name space" name in the same or a different file, it will return the same global instance regardless.

Continue to part 2: Configurable with files OR programatically

Sunday, 15 September 2013

"Comb" Library: Object Oriented

The comb library is a very useful set of utilities that will help in your javascript projects and especially with Node applications.

In this set of quick overviews I am going to give a brief run down of the different areas covered in the library:
  1. Object Oriented
  2. Logging
  3. Utilities
  4. Flow control
*But before going on: I am not connected to this project, I only found it helpful.. On with the show!

Object Oriented: javascript does not support the classical object-oriented paradigm, so comb provides a function, define, that takes an object with an attribute named instance or static. This attribute is your class definition.

let's roll out a short example:
Create our base class

var Mammal = comb.define({
 instance: {
  _type: "mammal",
  _sound: " *** ",
  constructor: function (options) {
   options = options || {};
   var type = options.type,
    sound = options.sound;
   type && (this._type = type);
   sound && (this._sound = sound);
  },
  speak: function () {
   return "A mammal of type " + this._type;
  }
 }
});

But wait Brian, didn't you say there was a static attribute as well?
var Mammal = comb.define({
 instance: {
  // ...the instance definition from above...
 },
 static: {
  DEFAULT_SOUND: " *** ",
  soundOff: function () {
   return "Im a mammal!!";
  }
 }
});

Let's now create another class to inherit from our base class.
var Wolf = Mammal.extend({
    instance: {
        _type: "wolf",
        _sound: "howl",
        speak: function () {
            return this._super(arguments) + " that " + this._sound + "s";
        },
        howl: function () {
            return "Hoooowl!";
        }
    }
});

Let's take a look at this in action.
var myWolf = new Wolf();
myWolf.howl() // "Hoooowl!"
myWolf.speak();// "A mammal of type wolf that howls"

For more reading here's the official documentation

Sunday, 1 September 2013

Js Arrays: Functions

Okay, so let's run through some javascript arrays.
I'm going to try and cover some of the more useful functions in array manipulation.
To start out we're going to use this five element array.

Our array

var v = ["a","b","c", "d","e"];
console.log(v.length); // 5

Index 0 Index 1 Index 2 Index 3 Index 4
a b c d e

v[9] = "j";
console.log(v[9]); // j
console.log(v[v.length - 1]); // j
console.log(v.length); // 10

Index 0 Index 1 Index 2 Index 3 Index 4 Index 5-8 Index 9
a b c d e undefined j


~ works with the END of the array


//push() appends one or more items to the end of the array
//(starting again from the original five element array)
v.push("f");
console.log(v.length); // 6
console.log(v[5]); // f

And for your info, push could also be achieved by
v[v.length] = "f"

Index 0 Index 1 Index 2 Index 3 Index 4 Index 5
a b c d e f

To add multiple elements, pass push several values at once, e.g. v.push("f", "g")

//before we pop, the last item is "f"
console.log(v.length); // 6
console.log(v[v.length - 1]); // f

v.pop(); // f

console.log(v.length); // 5
console.log(v[v.length - 1]); // e

Index 0 Index 1 Index 2 Index 3 Index 4
a b c d e

//pop() on an empty array returns undefined


~ works with the START of the array


v.unshift("f"); //prepends one or more items to the start of the array
console.log(v.length); // 6
console.log(v[0]); // f

Index 0 Index 1 Index 2 Index 3 Index 4 Index 5
f a b c d e


console.log(v.length); // 6
console.log(v[0]); // f

//shift() returns the first item from the array and shrinks it
v.shift(); // f

console.log(v.length); // 5
console.log(v[0]); // a

Index 0 Index 1 Index 2 Index 3 Index 4
a b c d e

Merging arrays


concat() is used to join two or more arrays.
var tail = ["x","y","z"];
var num = ["1","2","3"];

//concat() returns an array of the joined arrays
var v2 = v.concat(tail, num); //["a","b","c", "d","e","x","y","z","1","2","3"]

console.log(v.length); // 5
console.log(v2.length); // 11

0 1 2 3 4 5 6 7 8 9 10
a b c d e x y z 1 2 3
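To tie it all together, here is everything above in one runnable snippet, starting from a fresh five element array:

```javascript
var v = ["a", "b", "c", "d", "e"];

v.push("f");    // append to the end          -> ["a","b","c","d","e","f"]
v.pop();        // remove & return the last   -> back to 5 elements
v.unshift("z"); // prepend to the start       -> ["z","a","b","c","d","e"]
v.shift();      // remove & return the first  -> back to ["a","b","c","d","e"]

// concat() builds a NEW array and leaves v untouched
var v2 = v.concat(["x", "y", "z"], ["1", "2", "3"]);

console.log(v.length);  // 5
console.log(v2.length); // 11
```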

Well I think that's me done for a while.
Till next time kids.

Tuesday, 20 August 2013

Error: Cannot find module

I ran into this little node.js problem today. 
Error: Cannot find module 'autoloader'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (c:\node\web\auto\test.js:1:63)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)

I was trying to use a package called autoloader. It installed fine, but when I tried to run my node.js code: bang, I would get the above error.

This turned out to be a noob mistake on my part.
Fix: you can install a node package from any directory, but to have it seen by node.js you need to install it while in the node directory.

  1. Navigate to where your node executable is.
  2. Install your package as normal.. Done!

While I'm here I might as well talk a little bit more about installing packages (more commonly known as libraries). There are 3 things to know.
  • What: node.js has a very minimalist philosophy, where anything additional that you need can just be installed. To this end there is the node package manager (npm). This is the best source to find and install packages for pretty much everything you could imagine to do with node.js/javascript.
  • Where: Now, as with my above problem, if you run something like  npm install autoloader  it will create a directory called node_modules (if it does not exist already), download the library autoloader and install it into a subdirectory under node_modules. This is great, but remember you need to be in the node.js directory so your node_modules are all in the same place and the node.js executable can find them. (This is also referred to as installing locally.)
  • Global: There is an additional parameter, -g, that allows you to use the libraries from anywhere via your terminal. It does this by adding a path to the package in your environment variables. The above autoloader example would then look like  npm install autoloader -g


Thursday, 15 August 2013

Basic web storage

Node.js + Coffee + mongoDB

Good morning boys and girls, today I would like to share with you a little something I've been working on. 

So I set out to build a web service that would 
  1. Read in a POST request on a Node.js server and save it to a mongo database 
  2. When a GET request comes in, return all the posted data
    (this is the normal type of message you receive from a browser. i.e. get me this page/image/thing..).
and for good measure let's make sure we're using coffeescript's class ability.

To get started you will need to install the mongoDB server.

There are very good step-by-step tutorials for all major platforms on the mongoDB site, so once you install MongoDB, fire it up to make sure everything is working fine.
Navigate to where the mongoDB executable is:
cd /mongodb/bin

Now start your mongoDB server. *By default, MongoDB stores data in the /data/db directory.

Thu Aug 15 13:21:05.023 [initandlisten] MongoDB starting : pid=7444 port=27017 dbpath=\data\db\ 64-bit host=blackbolt
Thu Aug 15 13:21:05.025 [initandlisten] db version v2.4.4
Thu Aug 15 13:21:05.025 [initandlisten] git version: 4ec1fb96702c9d4c57b1e06dd34eb73a16e407d2
Thu Aug 15 13:21:05.026 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Thu Aug 15 13:21:05.027 [initandlisten] allocator: system
Thu Aug 15 13:21:05.028 [initandlisten] options: {}
Thu Aug 15 13:21:05.079 [initandlisten] journal dir=\data\db\journal
Thu Aug 15 13:21:05.081 [initandlisten] recover begin
Thu Aug 15 13:21:05.082 [initandlisten] recover lsn: 15263608
Thu Aug 15 13:21:05.083 [initandlisten] recover \data\db\journal\j._0
Thu Aug 15 13:21:05.085 [initandlisten] recover skipping application of section
seq:0 < lsn:15263608
Thu Aug 15 13:21:05.086 [initandlisten] recover skipping application of section
Thu Aug 15 13:21:05.164 [initandlisten] recover cleaning up
Thu Aug 15 13:21:05.165 [initandlisten] removeJournalFiles
Thu Aug 15 13:21:05.167 [initandlisten] recover done
Thu Aug 15 13:21:05.332 [initandlisten] waiting for connections on port 27017
Thu Aug 15 13:21:05.332 [websvr] admin web console waiting for connections on port 28017

We can now just leave this running.

Now in a new terminal window we install the mongojs driver for Node.js (this is the package the code below requires):
npm install mongojs

So here we are going to have 2 files: "" (our database manager class) and our server file.

Here we have our class and the constructor. The constructor is doing two things:
  1. The 'response' being passed in is prefixed with '@', so it automatically becomes an attribute of the class.
  2. Creating our mongoDB connection.

class myMongo
 constructor: (@response)->
  databaseUrl = "mydb"
  collections = ["randomValues"]
  @db = require("mongojs").connect(databaseUrl, collections)

Here we create the save function that is used for the POST messages.
It's split into two functions: "save" initiates the write to the database, and "_saveCallBack" runs after the values have been stored.
*note: the 'saveCallBack' function starts with an underscore. This is to denote that the function is private.

 save: (args) =>, @_saveCallBack)
 _saveCallBack: (err, saved) =>
  if err?
   console.log(err)
   @response.end("error, value not saved")
  else
   console.log("Saved #{JSON.stringify(saved)}")
   @response.end("will be saved")

Here is a similar setup to "save", in that it has two functions, but of course here we are reading out the information that has been stored by the POST messages. The magic happens where we loop through the returned values, outputting each on a new line ("\n").

 find: =>
  @db.randomValues.find {}, @_findCallBack
 _findCallBack: (err, values) =>
  if err?
   console.log err
  else if values.length is 0
   @response.write "No values found"
  else
   console.log "#{values.length} Requested"
   @response.write JSON.stringify(val)+"\n" for val in values
  @response.end()

Finally we use "export" to allow our mongoDB manager class to be used with other files.
module.exports = myMongo

Very simple to start off: we bring in the HTTP module and our mongoDB source file that will handle the reading and writing of our values.
http = require "http"
myMongo = require "./mongo"

Here's our request function that will be run every time a connection is made.

There are four main things happening here:
  1. Set our HTTP header
  2. Create an instance of our mongoDB manager (
  3. If it is a POST message, pass the values to be saved
  4. Else if it's a GET message, get the mongoDB manager to return all stored values

onRequest = (request, response) ->

 response.writeHead 200,
  "Content-Type": "text/plain"

 #pass in the 'response' object, so the mongoDB manager 
 #can use it to output the values on a GET   
 mongoConnet = new myMongo(response)

 if(request.method is 'POST')
  body = ''
  request.on 'data', (data) ->
   body += data

  request.on 'end', () ->
   POST = JSON.parse(body)

 else if(request.method is 'GET')

Here is where we build our server. You can see we're attaching the "onRequest" function and listening on port 8888.
Oh, and a little message to let us know our server is up and running.
server = http.createServer()
server.on("request", onRequest)
server.listen(8888)

console.log "Server up and Ready to eat"

Now here comes two commands and you can run them in any order and see what you get. :D

This first one is the POST message that will store information into our database.
curl -i -X POST -H "Content-Type: application/json" -d '{"name":"brian","code":"sandwich"}' localhost:8888

Next we have the GET message that will retrieve our stored values.
curl -i localhost:8888

a copy of both source files is available on: GITHUB


Tuesday, 6 August 2013

a node's journey into the amazon

Node.js + Coffee + Amazon

Here I'm going to run through hosting your Node server on Elastic Beanstalk.

A super quick intro to Elastic Beanstalk:
Amazon's Elastic Beanstalk is a deployment service that allows you to encapsulate different Amazon services to provide a specific server and resource configuration based on your requirements. Plus, there is no extra cost for the service itself. To find out more, read Amazon's AWS Elastic Beanstalk Components.

*Note: Beanstalk refers to each service collection as an "Application".

In this "Application", beanstalk will pull in the Amazon services it needs.
Let's get started! ^_^

I am going to use a directory called "aws", and I will use my Git basics server as my server code. This is important, as we will be using git to upload our code to beanstalk! In your command-line, go to this "aws" directory. We will also need a "package.json" file to tell our node server about our coffee source.

File: package.json
{
  "name": "AmazonCoffee",
  "version": "0.0.1",
  "description": "An example of a node.js server running CoffeeScript source",
  "homepage": "",
  "scripts": {
     "start": "coffee"
  },
  "dependencies": {
      "coffee-script": "*"
  }
}

Console ~ Let's stage our files.
 git add . 

Next we commit our staged file.
 git commit -m "added configuration file "package.json" for Node to run" 
You should get something like the following:
 [master (root-commit) 950b29b] added configuration file "package.json" for Node to run
 1 file changed, 19 insertions(+)

Next comes the real juicy bit! Deploying to AWS Elastic Beanstalk

You will now need to download & install the Elastic Beanstalk Command Line Tool 

Once you have downloaded the zip file, extract it to your node directory.

Next you will need to add the tool to your system's environment variables.

Console - Linux: *Remember to match the python folder version with the version of Python that you have installed
 export PATH=$PATH:node/AWS-ElasticBeanstalk-CLI-2.5.1/eb/linux/python2.7
On windows you will need to add ";c:\node\AWS-ElasticBeanstalk-CLI-2.5.1\eb\windows\" to your PATH in your Environment Variables. A good step by step can be found at How to set the path and environment variables in Windows

Back in our "aws" server folder, let's run:

 eb init 
Next you will get:
 Enter your AWS Access Key ID: 
To get your key you can follow my Coffee and S3 tutorial.

With your ID and key in hand, enter your ID.
 Enter your AWS Secret Access Key: 
Now you can pick a region to set up your server.
 Select an AWS Elastic Beanstalk service region. 
For me I picked 4) EU West... just 'cos!

 Enter an AWS Elastic Beanstalk application name (auto-generated value is "aws"): 

 Enter an AWS Elastic Beanstalk environment name (auto-generated value is "aws-env"): 
Here you can just hit enter and it will use the defaults based on your working directory (highlighted in yellow). 

 Select a solution stack. 
 Available solution stacks are:  
 5) 32bit Amazon Linux running Node.js 
For this I went with option 5. You could pick option 6 if you want the 64-bit version.
Next you will be asked what type of "environment" you want:
 Select an environment type.
 Available environment types are:
 1) LoadBalanced
 2) SingleInstance 

You're best off picking 2) 'SingleInstance', as you will only need 'LoadBalanced' for a live site.
 Create an RDS DB Instance? [y/n]: 
We don't need a database right now, so "n".
Next pick a profile.
 Attach an instance profile
1) [Create a default instance profile]
2) [Other instance profile]
or hit enter and let's go with the default: 1

* You can change your Beanstalk configuration by running the init command again.
For each setting you can just hit Enter to use the previous settings.

Let's deploy our server ^_^

 eb start 
 Starting application "aws".
 Would you like to deploy the latest Git commit to your  environment? [y/n]: 
Let's go with "y". This will take a while (really!), but you should get status updates while it's deploying.

After it's done you'll be given a URL to access your node server. 
 Application is available at " ...". 
If you have any problems let me know ;)


Sunday, 4 August 2013

Git basics

Here I'm going to run through the very basics of getting started with Git. Simply put, Git is used to store our server code. It is a LOT more powerful than that, but everyone needs to start with baby steps.

First step is to download/install the latest version of Git on your machine. 

Now I am going to build on my node.js/coffee example.

Once you have the source running, point your terminal to the directory where you have the coffee source saved.


 git init 
Your prompt should now have "(master)" at the end, but we now need to add our server code into the newly created repository.

Terminal ~ This will stage all the files
 git add . 
 * You can think of staging as adding files to a list that you are ready to commit.

Next we commit our staged file.

 git commit -m "First commit" 
You should get something like the following:
 [master (root-commit) 950b29b] First commit
 1 file changed, 19 insertions(+)
 create mode 100644 
Let's do a quick test to make sure all is good with our server.

So far so good! But there is one small thing bugging me... that console message when the server starts. Let's make two small tweaks. We are going to print out the port number, and make the port selection more dynamic: an optional argument when starting the script will specify the port; otherwise we check for a predefined port in the environment to start on.

First we will read in the port number from the command line. For this we will need process.argv, which is an array containing the command-line arguments. The first element will be 'coffee', the second element will be the path to our file and the last element will be the port number. The second part is process.env.PORT, which will try to pull a port number from the environment.

Add at the top of the script

port = process.argv[2]
port ?= process.env.PORT
port ?= 8888

Replace lines 15 & 17 with the below. !! Don't forget the indentation !!

 http.createServer(onRequest).listen (port)

 console.log ("Server on port #{port} has started.")

The changes above will read in a port value; if one can't be found, 8888 will be used as the default. The second part sets the port number and outputs it when the server is started.
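The fallback chain above can also be sketched as a plain JavaScript function (pickPort is my own name for illustration; the CoffeeScript version does this inline with the ?= existential assignment):

```javascript
// Pick a port: command-line argument first, then the PORT environment
// variable, then a hard-coded default. pickPort is an illustrative name.
function pickPort(argv, env) {
  var port = argv[2];                // e.g. ["coffee", "app.coffee", "8889"]
  if (port == null) port = env.PORT; // predefined environment variable
  if (port == null) port = 8888;     // final default
  return port;
}
```

Note that `== null` catches both `null` and `undefined`, which is roughly what CoffeeScript's `?=` checks for.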

 coffee 8889 
You should get something like the following:
Server on port 8889 has started.

Now lets commit our newly modified file with the following two commands

Terminal ~ This will stage just the file
 git add 
 git commit -m "the port number can be passed as a command line argument and the port number will be displayed on terminal"
And that is it for now.

Thursday, 1 August 2013

node.js + coffeescript in a nutshell

Here's a quick 101 on getting Node.js/CoffeeScript up on Ubuntu Server. I'm using Ubuntu 13.10, which contains Node.js in its default repositories. *Note that this will not be the newest version. However, it's the simplest way of getting started.

We just have to use the apt package manager. We should refresh our local package list before installing:

sudo apt-get update

sudo apt-get install nodejs

This is all that you need to do to get set up with Node.js. You will also need to install npm (the Node.js package manager).

sudo apt-get install npm

This will allow you to easily install modules and packages to use with Node.js.

Because of a conflict with another package, the executable from the Ubuntu repositories is called nodejs instead of node. It's just good to keep this in mind.

The last step is to install the CoffeeScript interpreter:

npm install coffee-script

Now that we are set up, here is a basic node HelloWorld server written in coffee. File:
http = require "http"
url = require "url"

start = ->
 onRequest = (request, response) ->
  pathname = url.parse(request.url).pathname
  console.log "Request for #{pathname}"
  response.writeHead 200,
   "Content-Type": "text/plain"
  response.write "Code Sandwich"
  response.end()
 http.createServer(onRequest).listen 8888

 console.log "Server has started."

start()

Ladies, start your servers.


Now check that this is all OK.


Coffee to S3

It's late, I have Tron on in the background, and I'm thinking of how to make the Grid a reality... may not be something I can come up with tonight :/

..what else is on my mind...

I'm pushing things to Amazon's Simple Storage Service (S3), but it would be good to automate them.

Coffee time! And I'm not just saying that because it's late. OK, bad pun.

Next we need to install the AWS SDK (more info on AWS SDK for node.js).


npm install aws-sdk
Now you will need to get your AWS access info

First, log in to your Amazon account at

Once logged in go to the top right, click on your name and then click "Security Credentials"

Next you will need to go to your Access Keys and click "Create New Root Key"

Next you will be prompted with the "Create Access Key" dialog. [This will invalidate your old key]
Download the 'Key File' to access your newly generated key.

Back in the editor, create two new files: config.json and

File: config.json

{ "accessKeyId": " AWSAccessKeyId goes here ", "secretAccessKey": " AWSSecretKey goes here ", "region": "us-west-2" }


#load the aws sdk
AWS = require('aws-sdk')

#load the keys to access your account
AWS.config.loadFromPath './config.json'

#lets create an object to connect to S3
s3 = new AWS.S3()

#As bucket names are shared across all accounts in a region,
#let's create a random number so multiple people can run this example
ran = Math.random()

#call the createBucket function and then add a file
s3.createBucket
 Bucket: "codemeasandwich#{ran}"
, ->
 params =
  Bucket: "codemeasandwich#{ran}"
  Key: "aFile"
  Body: "HelloWorld"

 s3.putObject params, (err, data) ->
  if err
   console.log err
  else
   console.log data
   console.log "Successfully uploaded data to myBucket/myKey"
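A quick note on that random number: because bucket names are shared, a fixed name like "codemeasandwich" would collide the second time anyone ran the script. The same trick as a tiny JavaScript helper (bucketName is my own illustrative name, not part of the post's code):

```javascript
// Append a random number so each run gets a (practically) unique bucket
// name. bucketName is an illustrative helper, not from the post's source.
function bucketName(prefix) {
  return prefix + Math.random();
}
```

In real projects you would more likely use an account ID or a UUID suffix, but a random number is enough for a throwaway example.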

Now let's fire up the console and run our ""

Now check that this is all OK.
