Friday, 27 September 2013

"Comb" Library: Logging (1 of 2)

Continuing with my overview of the comb library, let's take a look at the logging functions available.

Logging is critical in your applications, not just for errors but to get a good understanding of how your application is operating in the wild/production.

So what am I going to cover in this post? Well, let's see:

  • Logger inheritance through name spaces
  • Predefined level definitions along with the ability to define your own.
Logger inheritance through name spaces.. sample code anyone?

Let's load the comb library
var comb = require('comb'); //load the comb lib

Let's create a set of loggers for the different aspects we want to log
var logger_sys = comb.logger("sys");
var logger_user = comb.logger("user");
var logger_sys_logger = comb.logger("sys.logger");
var logger_user_logger = comb.logger("user.logger");

Note that the "." dot denotes the separation of "name space" levels

Next, here's a simple function just to print out the current level attribute for each of our loggers.
function print(){
 console.log("sys:"+logger_sys.level.name);
 console.log("user:"+logger_user.level.name);
 console.log("sysL:"+logger_sys_logger.level.name);
 console.log("userL:"+logger_user_logger.level.name);
 console.log();//lets skip a line for readability
}

First, let's see what the default levels look like
console.log(">> Let's see what the default levels look like");
print();

console.log(">> lets set sys and its child to 'DEBUG'");
logger_sys.level = 'DEBUG';
print();


console.log(">> lets set user and its child to 'INFO'");
logger_user.level = 'INFO';
print();

console.log(">> Now we will ONLY set sys.logger to 'WARN'");
logger_sys_logger.level = 'WARN';
print();

An example of inheritance within logging:
console.log('>> If we create a sub logger');
console.log('>> It will inherit the level from its parent');
console.log(comb.logger('sys.logger.log').level.name);

So what is the point of this?

OK, let's say you have "INFO" and "ERROR" levels (for a full list of predefined logging levels see comb.logging.Level). We could give one logger an inspired name like "mypack.myclass.note" and set its level to INFO, and another named "mypack.myclass.problem" set to "ERROR".

Something important to note: if you use the same "name space" name in the same or a different file, comb.logger will return the same global instance regardless.
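Just to illustrate (a quick sketch, assuming the behaviour described above):

var a = comb.logger("sys.logger");
var b = comb.logger("sys.logger"); // same name space, possibly from another file
console.log(a === b); // true - the same instance comes back
a.level = 'ERROR';
console.log(b.level.name); // ERROR - one instance, so one level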


Continue to part 2: Configurable with files OR programmatically

Sunday, 15 September 2013

"Comb" Library: Object Oriented

The comb library is a very useful set of utilities that will help in your JavaScript projects and especially with Node applications.

In this set of quick overviews I am going to give a brief run down of the different areas covered in the library:
  1. Object Oriented
  2. Logging
  3. Utilities
  4. Flow control
*Before going on: I am not connected to this project, I just found it helpful. On with the show!

Object Oriented: JavaScript does not support the classical object-oriented paradigm. Comb provides a define function that takes an object with attributes named instance and/or static, and these attributes are your class definition.

Let's roll out a short example:
Create our base class

var comb = require("comb"); //load the comb lib

var Mammal = comb.define({
 instance: {
  _type: "mammal",
  _sound: " *** ",
  constructor: function (options) {
   options = options || {};
   this._super(arguments);
   var type = options.type,
    sound = options.sound;
   type && (this._type = type);
   sound && (this._sound = sound);
  },
  speak: function () {
   return "A mammal of type " + this._type;
  }
 }
});

But wait, Brian, didn't you say there was a static attribute as well?
var Mammal = comb.define({
 instance: {
    ...
 },
 static: {
  DEFAULT_SOUND: " *** ",
  soundOff: function () {
   return "Im a mammal!!";
  }
 }
});
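Assuming comb attaches the static block to the class itself rather than to instances (the usual convention for this kind of OO helper), usage would look something like this sketch:

console.log(Mammal.DEFAULT_SOUND); // " *** " - accessed on the class, not an instance
console.log(Mammal.soundOff());    // "Im a mammal!!"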

Let's now create another class that inherits from our base class
var Wolf = Mammal.extend({
    instance: {
        _type: "wolf",
        _sound: "howl",
        speak: function () {
            return this._super(arguments) + " that " + this._sound + "s";
        },
        howl: function () {
            console.log("Hoooowl!")
        }
    }
});

Let's take a look at this in action:
var myWolf = new Wolf();
myWolf.howl();                 // logs "Hoooowl!"
console.log(myWolf.speak());   // "A mammal of type wolf that howls"
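And because the base constructor reads a type and sound from its options object, you can override the defaults at construction time. A small sketch using those same option keys:

var myMammal = new Mammal({type: "dolphin", sound: "click"});
console.log(myMammal.speak()); // "A mammal of type dolphin"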

For more reading, here's the official documentation.

Sunday, 1 September 2013

Js Arrays: Functions

Okay, so let's run through some JavaScript arrays.
I'm going to try and cover some of the more useful functions for array manipulation.
To start out we're going to use this five-element array.

Our array

var v = ["a","b","c", "d","e"];
console.log(v.length); // 5

Index:  0  1  2  3  4
Value:  a  b  c  d  e


v[9] = "j";
console.log(v[9]); // j
console.log(v[v.length - 1]); // j
console.log(v.length); // 10

Index:  0  1  2  3  4  5..8       9
Value:  a  b  c  d  e  undefined  j


PUSH & POP

~ work with the END of the array

*For the examples below, assume v has been reset back to the original five elements ["a","b","c","d","e"].

PUSH

//push() appends one or more items to the end of the array
v.push("f");
console.log(v.length); // 6
console.log(v[5]); // f

And for your info, a push can also be achieved with
v[v.length] = "f"

Index:  0  1  2  3  4  5
Value:  a  b  c  d  e  f

To add multiple elements you could also do the following (we won't apply this one, so v still just has "f" on the end):
v.push("g","h","i");

POP

console.log(v.length); // 6
console.log(v[v.length - 1]); // f

v.pop(); // f

console.log(v.length); // 5
console.log(v[v.length - 1]); // e

Index:  0  1  2  3  4
Value:  a  b  c  d  e


//pop() on an empty array returns undefined
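To see that edge case for yourself:

var empty = [];
console.log(empty.pop());    // undefined - nothing to remove
console.log(empty.length);   // 0 - the empty array is left unchanged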

UNSHIFT & SHIFT

~ work with the START of the array


UNSHIFT

v.unshift("f"); //prepends one or more items to the start of the array
//v.unshift("f","o","o");
console.log(v.length); // 6
console.log(v[0]); // f

Index:  0  1  2  3  4  5
Value:  f  a  b  c  d  e


SHIFT

console.log(v.length); // 6
console.log(v[0]); // f

//shift() returns the first item from the array and shrinks it
v.shift(); // f

console.log(v.length); // 5
console.log(v[0]); // a

Index:  0  1  2  3  4
Value:  a  b  c  d  e


Merging arrays

Concat

concat() is used to join two or more arrays.
var tail = ["x","y","z"];
var num = ["1","2","3"];

//concat() returns an array of the joined arrays
var v2 = v.concat(tail, num); //["a","b","c", "d","e","x","y","z","1","2","3"]

console.log(v.length); // 5
console.log(v2.length); // 11

Index:  0  1  2  3  4  5  6  7  8  9  10
Value:  a  b  c  d  e  x  y  z  1  2  3
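Worth noting: concat() also happily takes plain values alongside arrays, and it never modifies the original array. A quick sketch:

var v3 = v.concat("w", ["x","y"]);
console.log(v3);       // ["a","b","c","d","e","w","x","y"]
console.log(v.length); // 5 - v is untouched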

Well I think that's me done for a while.
Till next time kids.

Tuesday, 20 August 2013

Error: Cannot find module

I ran into this little node.js problem today. 
Error: Cannot find module 'autoloader'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (c:\node\web\auto\test.js:1:63)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)

I was trying to use a package called autoloader. It installed fine, but when I tried to run my node.js code: bang, I would get the above error.

This turned out to be a noob mistake on my part.
Fix: you can install a node package from any directory, but to have it seen by node.js you need to install it while in the node directory.

  1. Navigate to where your node executable is.
  2. Install your package as normal.. Done!

While I'm here I might as well talk a little bit more about installing packages (more commonly known as libraries). There are 3 things to know.
  • What: So node.js has a very minimalist philosophy where anything additional that you need can just be installed. To this end there is the node package manager (npm). This is the best source to find and install packages for pretty much everything you could imagine doing with node.js/JavaScript.
  • Where: Now, as with my above problem, if you run something like  npm install autoloader  it will create a directory called node_modules (if it does not exist already), download the autoloader library and install it into a subdirectory under node_modules. This is great, but remember you need to be in the node.js directory so your node_modules are all in the same place and the node.js executable can find them. (This is also referred to as installing locally. There's a quick way to check this below.)
  • Global: There is an additional parameter, -g, that allows you to use libraries from anywhere via your terminal. It does this by adding a path to the package in your environment variables. The above autoloader example would then look like   npm install autoloader -g  
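A quick sanity check you can run from inside a script (plain node, nothing extra to install): require.resolve() prints the full path if node can see a package from where your code is running, and throws the same "Cannot find module" error if it can't.

try {
  console.log(require.resolve('autoloader')); // full path to the installed package
} catch (err) {
  console.log('node cannot see it from here: ' + err.message);
}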


Thursday, 15 August 2013

Basic web storage


Node.js + Coffee + mongoDB


Good morning boys and girls, today I would like to share with you a little something I've been working on. 

So I set out to build a web service that would 
  1. Read in a POST request on a Node.js server and save it to a mongo database 
  2. When a GET request comes in, return all the posted data
    (this is the normal type of message you receive from a browser. i.e. get me this page/image/thing..).
And for good measure, let's make sure we're using CoffeeScript's class ability.

To get started you will need to install the mongoDB server.

There are very good step-by-step tutorials for all major platforms on the mongoDB site, so once you install MongoDB, fire it up to make sure everything is working fine.
Navigate to where the mongoDB executable is
cd /mongodb/bin

Now start your mongoDB server. *By default, MongoDB stores data in the /data/db directory.
mongod

Output
Thu Aug 15 13:21:05.023 [initandlisten] MongoDB starting : pid=7444 port=27017 dbpath=\data\db\ 64-bit host=blackbolt
Thu Aug 15 13:21:05.025 [initandlisten] db version v2.4.4
Thu Aug 15 13:21:05.025 [initandlisten] git version: 4ec1fb96702c9d4c57b1e06dd34eb73a16e407d2
Thu Aug 15 13:21:05.026 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Thu Aug 15 13:21:05.027 [initandlisten] allocator: system
Thu Aug 15 13:21:05.028 [initandlisten] options: {}
Thu Aug 15 13:21:05.079 [initandlisten] journal dir=\data\db\journal
Thu Aug 15 13:21:05.081 [initandlisten] recover begin
Thu Aug 15 13:21:05.082 [initandlisten] recover lsn: 15263608
Thu Aug 15 13:21:05.083 [initandlisten] recover \data\db\journal\j._0
Thu Aug 15 13:21:05.085 [initandlisten] recover skipping application of section
seq:0 < lsn:15263608
Thu Aug 15 13:21:05.086 [initandlisten] recover skipping application of section
more...
Thu Aug 15 13:21:05.164 [initandlisten] recover cleaning up
Thu Aug 15 13:21:05.165 [initandlisten] removeJournalFiles
Thu Aug 15 13:21:05.167 [initandlisten] recover done
Thu Aug 15 13:21:05.332 [initandlisten] waiting for connections on port 27017
Thu Aug 15 13:21:05.332 [websvr] admin web console waiting for connections on port 28017


We can now just leave this running.

Now, in a new terminal window, we install the mongojs driver for Node.js (the module our code below requires; it wraps the official mongodb driver)
npm install mongojs

So here we are going to have 2 files "server.coffee" and "mongo.coffee"

File: mongo.coffee
Here we have our class and the constructor. The constructor is doing two things:
  1. The 'response' being passed in is prefixed with '@' so it automatically becomes an attribute of the class.
  2. Creating our mongoDB connection

class myMongo
 constructor: (@response)->
  databaseUrl = "mydb"
  collections = ["randomValues"]
  @db = require("mongojs").connect(databaseUrl, collections)

Here we create the save function that is used for the POST messages.
It's split into two functions: "save" initiates the write to the database and "_saveCallBack" runs after the values have been stored.
*Note: the '_saveCallBack' function starts with an underscore. This is a convention to denote that the function is private.

 
 save: (args) =>
  @db.randomValues.save(args, @_saveCallBack)
 
 _saveCallBack: (err, saved) =>
  if err?
   console.log(err)
   @response.write(err)
  else
   console.log("Saved #{JSON.stringify(saved)}")
   @response.write("will be saved")
  @response.end()

Here is a similar setup to "save" in that it has two functions, but of course here we are reading out the information that has been stored by the POST messages. You should know that the @response.write line is where the magic happens, as it loops through the returned values, outputting each one on a new line ("\n").

 
 find: =>
  @db.randomValues.find {}, @_findCallBack
 
 _findCallBack: (err, values) =>
 
  console.log "#{values.length} Requested"
  
  if err? 
   console.log err
  else if values.length is 0
   @response.write "No values found"
  else
   @response.write JSON.stringify(val)+"\n" for val in values
  @response.end()

Finally we use module.exports to allow our mongoDB manager class to be used from other files.
module.exports = myMongo


File: server.coffee
Very simple to start off: I'm bringing in the HTTP module and our mongoDB manager (mongo.coffee) that will handle the reading and writing of our values.
http = require "http"
myMongo = require "./mongo"

Here's our request function that will be run every time a connection is made.

There are four main things happening here
  1. Set our HTTP header
  2. Created an instance of our mongoDB manager (mongo.coffee)
  3. Check if it is a POST message and pass the values to be saved
  4. Else if it's a GET message, get the mongoDB manager to return all stored values

onRequest = (request, response) ->

 response.writeHead 200,
  "Content-Type": "text/plain"

 #pass in the 'response' object, so the mongoDB manager 
 #can use it to output the values on a GET   
 mongoConnect = new myMongo(response)

 if(request.method is 'POST')
  body = '';
  request.on 'data',  (data) ->
   body += data

  request.on 'end', () ->
   POST =  JSON.parse (body)
   mongoConnect.save(POST)

 else if(request.method is 'GET')
  mongoConnect.find()

Here is where we build our server. You can see we attach the "onRequest" function to the server's "request" event and listen on port 8888.
Oh, and a little message to let us know our server is up and running.
server = http.createServer()
server.on("request", onRequest)
server.listen(8888)

console.log "Server up and Ready to eat"

Now here come two commands; you can run them in any order and see what you get. :D

This first one is the POST message that will store information into our database.
curl -i -X POST -H "Content-Type: application/json" -d '{"name":"brian","code":"sandwich"}' localhost:8888

Next we have the GET message that will retrieve our stored values.
curl -i localhost:8888

A copy of both source files is available on: GITHUB


Tuesday, 6 August 2013

A node's journey into the Amazon


Node.js + Coffee + Amazon

Here I'm going to run through hosting your Node server on Elastic Beanstalk.

A super quick intro to Elastic Beanstalk: 
Amazon's Elastic Beanstalk is a deployment service that allows you to encapsulate different Amazon services to provide a specific server and resource configuration based on your requirements. Plus, there is no extra cost for this service. To find out more, read Amazon's AWS Elastic Beanstalk Components

*Note: Beanstalk refers to each service collection as an "Application".

In this "Application", beanstalk will pull in 
Let's get started! ^_^


I am going to use a directory called "aws" and I will use my Git basics code as my server code. This is important, as we will be using git to upload our code to Beanstalk! In your command line, go to this "aws" directory. We will also need a "package.json" file to tell our node server about our coffee source.

File: package.json
{
"name": "AmazonCoffee",
"version": "0.0.1",
"description": "An example of a nofe.js running CoffeeScript source",
"homepage": "http://www.codemeasandwich.com/2013/08/a-nodes-journey-into-amazon.html",
"scripts": {
   "start": "coffee index.coffee"
 },
"dependencies": {
    "coffee-script": "*"
  }
}

Console ~ Let's stage our files.
 git add . 

Next we commit our staged file.
Console
 git commit -m "added configuration file "package.json" for Node to run index.coffee" 
You should get something like the following:
 [master (root-commit) 950b29b] added configuration file "package.json" for Node to run index.coffee
 1 file changed, 19 insertions(+)

Next comes the real juicy bit! Deploying to AWS Elastic Beanstalk

You will now need to download & install the Elastic Beanstalk Command Line Tool 

Once you have downloaded the zip file, extract it to your node directory.

Next you will need to add the tool to your system's environment variables.

Console - Linux: *Remember to match the python folder version with the version of python that you have installed
 export PATH=$PATH:node/AWS-ElasticBeanstalk-CLI-2.5.1/eb/linux/python2.7
On windows you will need to add ";c:\node\AWS-ElasticBeanstalk-CLI-2.5.1\eb\windows\" to your PATH in your Environment Variables. A good step by step can be found at How to set the path and environment variables in Windows

Back in our server "aws" folder, let's run:

Console
 eb init 
Next you will get:
 Enter your AWS Access Key ID: 
To get your key you can follow my Coffee and S3 tutorial.

With your ID and key in hand, enter your ID.
 Enter your AWS Secret Access Key: 
Now you can pick a region to set up your server
 Select an AWS Elastic Beanstalk service region. 
For me I picked 4) EU West.. just cos!

Next;
 Enter an AWS Elastic Beanstalk application name (auto-generated value is "aws"): 
and

 Enter an AWS Elastic Beanstalk environment name (auto-generated value is "aws-env"): 
Here you can just hit enter and it will use the defaults based on your working directory.

 Select a solution stack. 
 Available solution stacks are:  
 5) 32bit Amazon Linux running Node.js 
For this I went with option 5. You could pick 6 if you want a 64-bit version.
Next you will be asked what type of "environment" you want:
 Select an environment type.
 Available environment types are:
 1) LoadBalanced
 2) SingleInstance 

You're best off picking 2) 'SingleInstance', as you will only need 'LoadBalanced' for a live site.
 Create an RDS DB Instance? [y/n]: 
We don't need a database right now, so "n"
Next pick a profile.
 Attach an instance profile
1) [Create a default instance profile]
2) [Other instance profile]
Or just hit enter and let's go with the default, 1.

* You can change your Beanstalk configuration by running the init command again.
For each setting you can just hit Enter to use the previous settings.

Let's deploy our server ^_^

 eb start 
 Starting application "aws".
 Would you like to deploy the latest Git commit to your  environment? [y/n]: 
Lets go with "y".. This will take a while.. but you should be getting updates while(Really!) its deploying.

After it's done you'll be given a URL to access your node server. 
 Application is available at " ... .elasticbeanstalk.com". 
If you have any problems let me know ;)


Sunday, 4 August 2013

Git basics



Here I'm going to run through the very basics of getting started with Git. Simply put, Git is used to store our server code. It is a LOT more powerful than that, but everyone needs to start with baby steps.

First step is to download/install the latest version of Git on your machine. 

Now I am going to build on my  node.js/coffee example

Once you have the source running, point your terminal to the directory where you have the coffee source saved.

Terminal

 git init 
Your path should now have "(master)" at the end, but we now need to add our server code to the newly created repository.

Terminal ~ This will stage all the files
 git add . 
 * You can think of staging like adding to a list of files that you are ready to commit.

Next we commit our staged file.

Terminal
 git commit -m "First commit" 
You should get something like the following:
 [master (root-commit) 950b29b] First commit
 1 file changed, 19 insertions(+)
 create mode 100644 index.coffee 
Let's do a quick test to make sure all is good with our server


So far so good! But there is one small thing bugging me: that console message when the server starts. Let's make two small tweaks. We are going to print out the port number, and make the port selection more dynamic by adding an optional argument to specify the port when starting the script, otherwise checking if there is a predefined port environment variable to start on.

First we will read in the port number from the command line. For this we will need process.argv, which is an array containing the command line arguments. The first element will be 'coffee', the second element will be the path to our file and the last element will be the port number. The second part is process.env.PORT; this will try to pull a port number from the global environment variables.

Add at the top of the script

port = process.argv[2]
port ?= process.env.PORT
port ?= 8888
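(If CoffeeScript's ?= existential assignment is new to you, the same fallback chain in plain JavaScript looks roughly like this sketch: take the command-line argument if given, otherwise the PORT environment variable, otherwise the default.)

// Roughly the equivalent fallback chain in plain JavaScript
var port = process.argv[2] || process.env.PORT || 8888;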

Replace line 15 & 17 with the below !! don't forget the indentation !!

 http.createServer(onRequest).listen (port)

 console.log ("Server on port #{port} has started.")


The changes above will read in a port value; if one can't be found, 8888 will be used as a default. The second part starts the server on the chosen port and outputs the port number when the server is started.


Terminal
 coffee index.coffee 8889 
You should get something like the following:
Server on port 8889 has started.

Now let's commit our newly modified file with the following two commands

Terminal ~ This will stage just the index.coffee file
 git add index.coffee 
 git commit -m "the port number can be passed as a command line argument and the port number will be displayed on terminal"
And that is it for now.