Quad Titan X Build: Phase Two

This is my third post related to my Quad Titan X build. You can find the introduction here and the first phase here. The introduction post provides a full list of components, including links to more information on them and to where you can pick them up.

In Phase One we focused on the case itself – taking the case apart, getting it custom powder coated, and then putting it back together. Today I want to focus on the initial assembly and some of the key prep work. First, let’s see what we’ll have at the end of today.

IMG_6535

Starting to look nice already.

Order of Operations

When planning out a build like this there are a lot of moving parts: the structural components, the cooling infrastructure, the electronic connections, the water pathways and valves, and through all of that your own hands attempting to put it all together. I am going to try to describe below the order that I did things in, but I cannot stress this enough – if you don’t think about it ahead of time, you will end up in a situation where you have something already in the case and suddenly realize “oh crap, I can’t reach/can’t slide in/will damage this new thing I am trying to put in because that other thing I just put in is in the way”. I did this a couple of times during my construction, even after spending some time thinking it through.

In particular, watch out for those darned connection points for the water cooling. It may be that some brands are easier, but man, those compression fittings were a pain in the neck. I happened to pick XSPC fittings that were sexy looking – an awesome dark color with a shine, a nice slim profile – but they were a royal pain to get on, and I often found myself pulling back on one to find I hadn’t quite gotten it screwed in. This was exacerbated by all the STUFF in the way. I strongly recommend thinking carefully about when you want to twist on these fittings and whether you can do it before you put other things in the system.

Ok, enough warnings, let’s get on with it.

Prepping the Radiators

Looking at my build I knew I wanted to get the two big radiators in early, so I could see how the water line routing would need to go when planning the rest of it. I cannot stress this enough – it is VERY IMPORTANT to prep your radiators before you put them in. Radiators do NOT come out of the manufacturing process clean. The manufacturer may even say they clean them, but it doesn’t matter – all the banging around in shipment shakes new dirt and metal fragments loose inside the radiator.

I’ll describe the process here, but here is a video I watched before doing mine that really helped me out: How to Clean a Radiator

So you take your radiator and you get distilled water and white vinegar (1 part vinegar to every 4 parts water) – it is very important to use distilled water, not tap water! Most tap water carries a lot of particles, and you really want this to be clean. Besides getting into the radiator themselves, those particles can cause other particles to stick and remain inside – and when you later put your coolant in, they will suddenly flush out. Not a good thing.

Pour the mixture of distilled water and white vinegar into the radiator until it is pretty well filled up, then let it sit for a while so the vinegar can go to work – I left mine overnight. Empty it out and do it again. After the first few rounds, switch to a process where, after letting it sit a bit, you cover the entry/exit holes with your fingers and shake the crap out of it. Then dump the water out and do it again. And again. And again. For my radiators this was a day’s work of letting them sit, shaking, and dumping.

I recommend you dump the radiator into a clear container, then hold the container up and look at what you have. You’ll find lots of black and metallic particles and junk in the water. Keep doing this until the water comes out crystal clear – then do it a few more times, because you can’t see everything.

This may sound like a lot of work. For me, doing it this way made sense since I was doing only one build and didn’t expect to be cleaning out any radiators in the future. You may decide to be more industrious and take the approach of this fellow, who used an aquarium pump to run a constant flow through the radiator to clean it out: How to clean your new watercooling radiator

Whatever you do, make sure you clean those suckers out. While you are at it, now might also be a great time to do the same for the reservoir – we aren’t going to install it in this step at all, because it would just get in the way, but you might as well get it all done. If you want to be really careful about it you can also clean your tubing and fittings as well, just to be sure.

Installing Some Cooling

At this point in my build I took on the task of swapping out all the fans – you can find them in my introductory post – from the black and white configuration they initially came in to the alternate black and red setup I wanted. This build uses a LOT of fans – about 20. You might think “man, that will be loud!” but in fact the extra fans are how I made the system quieter. By using high static pressure, low noise fans – and using more of them – you can run the fans at lower RPMs and then barely hear them. This build is much quieter than my smaller builds that had only 3 fans.

I started by replacing the two larger fans in the front of the case. For all the fans, it is important to pay attention to the direction of air flow. You want air pushed in one side of each radiator and out the other, and you want air to flow through the case – in one side and out the other. There is a myth that you always want the air from the radiator to flow out of the case. In reality, the air coming off the radiator is about the ambient temperature the inside of the case will be anyway – that is the entire point – and it is far more important to have good directional flow of air through the case than it is to point all the radiator air flow out.

The larger 480mm radiators required that I screw one bank of fans directly onto the radiator, then place the other bank between the case and the radiator and slide long screws from the case wall, through the fans, and into the radiator to secure it all. This took a bit of doing, as the fans kept wanting to move all over the place, but with some determination you can get it done.

Make sure the screws are not too long, or when they screw into the radiator they could puncture the thin coolant channels.

I left the smaller, single-fan radiator for the back of the case off at this point, because it would get in the way of the next step. What I did do was go ahead and make good use of those fan cable splitters. I wired each bank of four fans – two in/out pairs – to a four-way splitter, then wired the two splitters per large radiator into a two-way splitter. This was also a great point to start some simple cable management – make liberal use of cable ties throughout the back of the case to keep the cabling in the front clean. At this stage you aren’t yet sure what all you will want to tie where, so I used twist ties as temporary holders, added various cables to them over time, and once I was done made everything permanent with cable ties. The motherboard tray has lots of little places to hook these ties into to keep the cabling under control in the back.

Here’s a pic of the bottom radiator/fan setup – just some eye candy – with the access panel flipped open.

IMG_6538

Power Supply

Not much to say here on installation – power supplies are very easy to install, just some screws through the back panel of the case. I will say I highly recommend you get a modular power supply, to avoid a bunch of unused cords sitting around, and secondly make sure you get a power supply with adequate juice. A build like this takes a LOT of power…you have the pump, the video cards, the motherboard, the CPU, all the fans, etc. to power. For me this absolutely necessitated a 1500W power supply. Make sure you do the math for yours.
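To give a sense of the napkin math, here is a rough sketch – all of the wattages below are ballpark assumptions for illustration, so plug in the numbers for your own components:

var gpus  = 4 * 250;  // four Titan X cards at roughly 250W each (assumption)
var cpu   = 150;      // overclocked CPU, rough estimate
var other = 150;      // pump, ~20 fans, drives, motherboard, etc.
 
var total = gpus + cpu + other;
console.log(total + 'W estimated'); // ~1300W, so a 1500W unit leaves some headroom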

By the way, that power supply fan pretty much never runs. The units sold at this level are really nice – you can find the full details in my introduction post with all the components listed.

Motherboard Installation w/ Waterblock, CPU, Memory

At this point I wanted to install the motherboard; however, I had a bit of an order-of-operations item – the waterblock install required access to screw holes I wouldn’t be able to reach once the motherboard was in the case. So I did the CPU and waterblock installation outside the case. The best place I have found to work on a motherboard is on a table, in a room without carpet, with the board sitting on top of its anti-static bag, which in turn sits on top of the box it came in. This provides a nice working surface.

Since we are talking about the water block, I want to highlight something important with water cooling systems: mixing metals can cause problems. If you have a copper-based water block, you need to make sure the rest of your blocks, radiators, etc. will work with it – some metal combinations cause galvanic corrosion, and you will be running water through all of it in a single loop. You also need to make sure that whatever liquid you mix yourself, or coolant you buy to run through the loop, is safe with the metals you plan to use.

So I took the water block and carefully cleaned it – water blocks like this usually come with directions on what to clean them with, and mine did, so I followed those directions. If you aren’t sure, do some googling on how best to clean a water block with your block’s internal surface material – it all depends on the surfaces.

I installed the CPU per the directions that come with the CPU and motherboard (pay attention to how you orient it!). I then prepped the motherboard and the water block. In my case this involved carefully removing a heat sink from the board (I had to break the metal bar that ran from one heat sink to another – just some simple twisting) and then layering pieces of thermal tape in the appropriate places. It is important to use the right thickness of thermal tape in the correct places for best results – if the water block does not touch something it needs to, you may find yourself with a fried component.

I then covered the CPU with a uniform, thin coating of thermal paste and laid the water block on. I had to hold the waterblock in place with the board flipped over to do the screwing on the back side. Once done, take a close look and make sure you see solid contact between component/chip, thermal tape and water block wherever you can. Make sure it is screwed in well, but obviously don’t crack the motherboard with your herculean efforts.

Screwing the motherboard into the case is then a simple process – just be careful to follow the motherboard instructions and put the screws only in the holes indicated.

At this point you should also screw the port fittings into the radiators and water block if you haven’t already. As with all water fittings, screw them hand-tight.

I also at this point went ahead and installed my memory, since it was low profile (wouldn’t get in the way of anything) and looked cool with the red heatsink covers.

Installing the Pump and Pump Controller

An important note about the pump in this configuration – with quiet fans in this kind of setup, the water pump can easily be the noisiest part of your system if you do the naive thing and just screw it into the bottom of the case. The pump vibrates, and that vibration will transfer to whatever surface it touches.

As such for my build I decided to avoid the screws and instead used a noise suppressing double-sided sticky gel pad – you can find what I used in the introductory post. This stuff is often used to reduce the vibration of motors on hobby radio controlled airplanes and quad-rotors – it has worked out great in this application. I just stuck it on the bottom of the pump then placed the pump down in the bottom of the case.

This is a good time to point out – make sure you know the flow direction of your pump (usually water in the front and out the top, but it varies) and think about how your water lines are going to run. Make them flow in natural arcs as much as possible, not hard turns. For me this meant facing the pump toward the motherboard, in the direction of where the reservoir would be once assembled. I placed the reservoir in temporarily and held up some tubing to find the best spot before permanently sticking the pump in there.

I also at this point slid the drive bays back into the case and installed the pump controller in the lowest one – again using that double-sided sticky stuff, in this case just because it was sticky and I didn’t want to drive screws. I used a dual voltage pump – the 450S – which can run at 12V or 24V. At 24V it pumps at a much higher gallons/minute rate, which is a key metric for how well your system will cool the components, so I had to get a controller that could provide 24V of power. To do this the controller had to hook into two 12V sources, so it is important to make sure your power supply has dual 12V rails (or can otherwise supply both feeds).

I don’t have a picture from right at this step, but this one from a bit later shows the pump ready to go.

IMG_6554

Back Radiator and Fans

To finish up today, I’ll mention that putting on the back radiator and fan was also a cinch – just screw the fan on the front and the radiator to the back wall. I had an “oopsie” moment here though – I had purchased two 140mm fans to do this in push/pull configuration. What I found was that with the second fan in place, the nozzles were too close to the upper radiator and I didn’t have room to get the water tubing in; if I flipped it, it would interfere with the video cards later. So I installed only one fan, in a push configuration, inside the case – you want to push, not pull, if you are only running one fan – and it looked nicer this way anyway.

IMG_6536

Here we are, ready for the next step:

IMG_6539

With that we’ll call it a day – cheers.


Quad Titan X Build: Phase One

I introduced my Quad Titan X system build, along with lists of components, in this post: Introduction

The description of how I completed the build is a bit long, so I am splitting it up into multiple posts. I will try to cross-link to make navigation easier. On to the content.

The first phase of my build is all about preparation – preparing the house and ordering the starting materials. I will also go into the case and powder coating job a bit.

Preparing the House

When I started this project it took me a while to come to a key realization – this beast was going to suck a LOT of power, particularly when running complex benchmarks or floating point calculations across all four GPUs. A bit of napkin calculation told me that it could – and in some situations, I later found, would – utilize almost the total acceptable current of a 15 amp circuit. At the time my office was on a shared circuit with the living room – including its associated entertainment center – and (god only knows why) the microwave in the kitchen. This wouldn’t do.
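For reference, the napkin math goes something like this (assuming a standard US 120V circuit): 15A × 120V = 1800W theoretical, and applying the usual 80% continuous-load rule leaves roughly 1440W of sustained draw – uncomfortably close to what a 1500W power supply can pull at full load.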

I happened to know a friend with a background as an electrician who helped me out. We grabbed a new 15 amp breaker (don’t forget that for living spaces today, code requires AFCI breakers, which are fairly expensive) and a bunch of Romex cable. Installing the new breaker was a snap – if you haven’t done it before, this YouTube video can help: How to Install an AFCI Breaker.

Running the cabling was aided by the fact that my office thankfully sits on top of an unfinished part of my basement, so I was able to install a new outlet and wire it back to the panel without much trouble. When adding a new outlet for a computer system or electronics, one option to consider is a surge protected outlet – this lets you avoid having a big honking surge protector sitting on the floor. Here is an example: Leviton Surge Protected Outlet – just make sure it is rated for enough current for your application. Obviously this doesn’t do much for you if you are going to need a power strip anyway – in which case I suggest picking up a surge protector strip you can hang on the wall, like this one: Belkin Wall Mount Surge Protector. I used a setup like that behind my entertainment center, and to keep the cable from popping out unexpectedly I used a staple gun with plastic guarded staples to secure the power cabling along the wall – works a charm.

If you have never installed a new outlet, this video can help: Installing an Outlet in an Existing Wall. If you aren’t sure what you are doing, definitely consider finding an electrician – though I will say that larger electrical companies will charge an absolute fortune for a new “home run”. One company quoted me $1200; I think I spent less than $50 doing it myself. Find a smaller electrician who isn’t trying to “get rich quick” on simple jobs. Also keep in mind you may need to get an electrical permit and inspection where you live – check with your local government if you aren’t sure.

With the electrical run the house was now ready to accommodate the beast.

Preparing the Work Space and Final Location

It is important to consider just how BIG a case like the one I used is. Pay close attention to the dimensions on the online store page. It is unlikely you will find a desk a beast like this can fit under, and you’ll need lots of space to put it all together. Especially when doing a custom water cooling solution, you’ll find yourself bending into the case all sorts of different ways to get those darned compression fittings tightened just right, and these gymnastics require adequate room.

For my purposes I used the eventual home of the beast – my office room – as the location for the build; however, you may want to consider using space in a basement or garage. Make sure you can keep it all away from children and that you won’t cry if you spill some coolant on the floor. It is also important to ensure your living partner won’t randomly happen along and “clean up” your carefully organized parts and fittings. Speaking of living partners, make sure that wherever you do the build, it won’t cause an unholy family war.

Another thing worth noting is static electricity. When I first started doing PC builds I always wore a static wristband, and it isn’t a terrible idea. To be honest, though, these days I find the risk a little overblown with modern components, and some careful habits will help: avoid putting electrical components together on carpet, and grab a metal part of the case before you touch the electronics – this discharges any static before you touch the sensitive component. Also wear some shoes…remember how you could run around as a kid in socks and then shock somebody with a touch? Yeah, don’t do that to your $600 motherboard.

Here is the case, fresh out of the box, ready in my chosen work space:

IMG_6528

Since we are starting to show actual components, let’s dig in a bit on acquiring all these beauties.

Purchasing Components

As a reminder, I listed out all the components (that I could remember, anyway) in this introductory post here.

Price is a major factor when buying PC components – you should absolutely shop around. But keep in mind that a major part of the pricing will be shipping – and buying multiple components from the same place may reduce that cost significantly. Even better, for me, was to avoid shipping costs altogether and invest in Amazon Prime – it costs around $90/year these days and offers free shipping plus a bunch of other benefits. I easily saved well over the $90 investment in shipping costs when purchasing my components. It also helps a lot when you put an order in and then realize you forgot that last pack of compression fittings…no big deal, since getting that little pack shipped to you will be free too.

Some things, though, you just can’t get at Amazon. I was actually very surprised how much I could get there…but, for example, getting 4 Titan X cards was impossible – I had to order from multiple sources. With liquid cooling, special fittings, tubing or water blocks often aren’t available there either. I recommend FrozenCPU as a good source for hard-to-find items. In one case I had to order directly from the manufacturer – the EK monoblock for my motherboard. That was a bit risky, but sometimes you have to do what you have to do; it was clearly the best option for my needs.

Prepping the Case

When doing a build like this, I think it is important to start from the ground up. You want to give yourself as much room as possible to move around and install your components, and you want to put things back in as late in the process as you can. Start by pulling out things like removable drive bays, dust filters, and drive-securing devices like the plastic lock-ins used in my case. Clear all that out, carefully place it somewhere organized, and give yourself room to get that motherboard in and do all that wiring.

For my situation, I had to go several steps further – I wanted to get parts of the case powder coated, and to do that you need to separate everything out into the pieces you want done. This meant removing pretty much every screw in the case, and pulling off things like the back end with the legs and separating the grill from it. Once everything was unscrewed, there was still more work to be done. Here is a picture of rivets (in the top left) securing the motherboard tray in place – there were several clusters of rivets on various sides of the case, and even rivets in the back panel securing the grill work.

IMG_1576

To remove rivets, you need a drill. You put in a drill bit the size of the rivet head, then ‘drill out’ the rivet – essentially using the drill to shred the head so the rivet body falls right out the other side. This is hard to describe in text, so here is a helpful video I found on the subject: Remove POP Rivets from Computer Case

Here is a picture of the parts of my case laid out, ready to go out for powder coating:

IMG_1580

Everything shown here I had powder coated in red, with one exception: that thin grill on the far left. When I took it to the shop they were concerned it would warp too much in the process, so they recommended against doing it. That grill goes on the top of the case, and it would have looked cool offsetting the black on the outside, but it isn’t much of a loss.

IMPORTANT POINT: What comes apart must go back together, and in particular I found there was a wide array of screw sizes used throughout the case. My solution was to grab a box of sandwich bags (you know, those little plastic baggies, preferably with ziplock tops, that your mother used to pack your PB&J in). I put sets of like screws into a baggie and labeled it with a Sharpie marker. I did this with other pieces too, like the rubber grommets that go in the cord routing holes. You can’t over-organize here…you don’t want to be sitting with a case you can’t figure out how to put back together. Be descriptive with your labeling.

Powder Coating

Powder coating is an awesome, durable way to get a paint job on metallic components. It does come with some real risks, as my local shop informed me. With the components being so thin and lightweight compared to what they usually work with, the shop was concerned the material would warp in the process. To avoid this, we skipped the very thin grill, and I also had them skip the step of blasting off the existing coating that was already on the case.

Now, there are services online you can ship your parts to – or, if you happen to live close to one, even better – which specialize in these types of thin, lightweight metals. They will carefully strip all the existing paint, put on a new coating, and block out the screw holes while doing so, so the holes don’t shrink up on you – all very useful. If you are nervous or want a truly stellar job, by all means go for it – you’ll pay hundreds of dollars, though. For me, I went to my local auto powder coating shop, handed the fellow $100, and said “I’m willing to take the risk that you don’t work with this sort of thing normally.” I think they had a lot of fun doing it, and the results were fantastic. Not having the underlying coat stripped makes the resulting powder coating less resilient, but this is a computer case – it is not going to be driving down the road at 70 mph with rocks flying at it. Sitting in my office it holds up just fine.

Here’s a pic of everything back from the powder coater. I love that red – it goes extremely well with the reds in the motherboard, fans and memory, and really gives the whole thing a distinctive look:

IMG_6529

Here are some close-ups so you can see the complete, quality coverage that powder coating provides:

IMG_6530

IMG_6531

Unfortunately the lighting in these pictures does not do the color justice, but there will be a picture in a bit that shows it all coming together.

Re-assembly

It is important to remember how everything came apart and put it back together the same way. This is where a cheap pop rivet gun with some rivets (see the miscellaneous section in my components list here) comes into play. They are really simple to use – just follow the instructions that come with the rivet tool. Then remember to screw together all the case parts – you DID label all the screws, right?

At this point you should have the case back together and ready for further assembly. Don’t over-assemble here…you shouldn’t put things like drive bays back in yet if you can avoid it. Leave out anything that slides in easily later and would just get in the way.

I unfortunately don’t have a good picture of this step, but this pic from a later phase of the build shows things nicely coming back together:

IMG_6538

A cautionary note when it comes to powder coating – it adds a layer everywhere. Unless you use a high end service that knows what to do with these parts, many of the screw holes can and will shrink – this was a particular pain in the motherboard tray. It took a wrench to get the motherboard standoffs to grind back into their screw holes, and the same for the screws on the PCI back plates, but it all worked out.

I hope this was helpful and/or interesting – I look forward to sharing more in coming posts.


Computer Build: Quad Titan X with Custom Water Cooling

Introduction

I’ll be describing the build of my latest PC, which I refer to as The Beast. It runs Quad (yes, 4) Titan X graphics cards, 32GB of RAM and some decent Intel processor action. I overclock the entire system fairly significantly, and all hot components, with the exception of the hard drives, are under water cooling – including water blocks for all GPUs, the processor and mainboard components. The system has two large radiators and one smaller one, and features 20 low noise/high static pressure fans. Thanks to the large radiators, the high number of fans in push/pull configurations, and some vibration suppression for the water pump, the system runs whisper quiet – far quieter than your typical rig with a couple of cheap fans. I also had some custom powder coating done for the paint job.

I did this build for myself about a year ago and kept meaning to share it here, but for whatever reason I never found the time – until now. After looking at my monster PC this afternoon I decided it was time to get it done, so I will be doing a series of posts on the build of my custom system. I purchased the vast majority of components via Amazon but also had to reach out to FrozenCPU for certain specialty items. I will start with some pictures so you can get a sense of the final product, and then try to enumerate all of the components.

Main Cabinet

Front View

IMG_6618

Closed Full Side View

IMG_6612

Final Open Full Side View, for Good Measure

IMG_6617

The case comes black by default, the red is custom powder coating. Let’s dig in.

Admittedly it has been a full year since I started this build, so I will probably miss some components, in particular the cooling hardware where I had to make special orders. Forgive me the lapses, but this should cover the vast majority of what was required.

System Components

Cooling

Miscellaneous

Existing Items

  • Blu-Ray/DVD-RW drive from old build
  • 4 old hard drives, including some old Caviar Black and a Corsair SSD
  • Various cables, such as SATA cables for old drives etc, from previous build where appropriate

Building this system was a labor of love and I’m excited to share the details including lessons learned. In subsequent posts I will walk through the major steps of the build and also discuss what went well, what was difficult and what was exceptional.

Phases

You can find the first phase of the build out for this system here: Quad Titan X Build: Phase One


Backbone: A view and ownership of its element

When writing modern applications using Backbone I often find myself relying on past experience with object oriented architectures to help me structure an application. This is rather natural, as the first programs I ever wrote were in an object oriented language (MOO, for the curious). While often hailed as one of the primary contributions of the object oriented paradigm to modern software development, encapsulation actually has a much more varied history – and it is encapsulation I want to talk about here.

When writing an application in Backbone one of the primary abstractions is that of the view – I happen to also use AMD in my applications so I’ll leverage that syntax here. Here’s an example of a simple view:

define( ["jquery", "underscore", "backbone"], 
   function( $, _, Backbone ) {
         var myView = Backbone.View.Extend({
            render: function() {
               $( '#content' ).html( "HELLO WORLD!" );
            }
      });
 
      return new myView;
   }
);

I’ve seen dozens of examples and tutorials that take this approach to views. On the outside there is nothing immediately wrong with it – clearly there is encapsulation: we define a function which acts as a closure denoting the ‘module’, and we give it some dependencies. We further have a view object – and in fact a single view object, as returning a new instance of myView gives us something very much akin to a typical singleton pattern.

However, when I look at examples like this, what I see is a sham of encapsulation – we have split our code up, but we have not placed proper boundaries on the actual impact of that code. Namely, this line:

$( '#content' ).html( "HELLO WORLD!" );

breaks what I see as a good structural principle. By using jQuery to select an element elsewhere on the page, the view now has to ‘know’ things about its outside world that it otherwise would not need to be privy to. Additionally, it puts constraints on any application that would use this view – and encourages an almost spaghetti effect of interdependencies (how do I know somebody else isn’t modifying the html of the content element?).

I think views should maintain two major properties to be effective units of encapsulation:

  1. The view should not interact with DOM elements outside of itself
  2. The view should own its element

Taken together, we come up with a paradigm that looks more like this:

define( ["jquery", "underscore", "backbone"], 
   function( $, _, Backbone ) {
      var myView = Backbone.View.Extend({
 
         tagName: 'div',
 
          id: 'MyViewDiv',
 
          className: '',
 
          render: function() {
             $( this.el ).html( "HELLO WORLD!" );
             return this;
          }
      });
 
      return new myView;
   }
);

Some higher level object that owns content might then do something like this:

require( [ "views/myView" ],
   function( myView ) {
      $( "#content" ).html( homeView.render().el );
   }
 );

In such a structure, all element lookups can also scope to this.el as the parent element, ensuring that you don’t run into code that modifies similarly named tags written out by some other poor unsuspecting view.
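For example, a view might scope all of its lookups like this – a minimal sketch, where the ‘title’ child element is purely hypothetical:

define( ["jquery", "underscore", "backbone"],
   function( $, _, Backbone ) {
      var myView = Backbone.View.extend({
         tagName: 'div',
 
         render: function() {
            $( this.el ).html( "<span class='title'></span>" );
            // scoping the selector to this.el means we only ever
            // touch our own markup, never another view's
            $( '.title', this.el ).text( 'HELLO WORLD!' );
            return this;
         }
      });
 
      return new myView();
   }
);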

In either case, however you decide to approach it and express views in a Backbone application, it is important that consistency be attained – nothing would be more confusing than a mix of views that live independently and views that act as all-knowing modifiers of the page around them.


Issue Retrieving Collections from Jersey REST Services

Ran into a painful issue today when working with a sample application I was putting together. In my application I use JPA to retrieve data from an Oracle database, which I then expose via a Jersey service (running on WebLogic 12c). My front end uses Backbone.js, and I was attempting to populate a Backbone collection with the results of the Jersey REST call.

So the REST service was written quite simply:

@Path("/productcategories")
public class ProductCategoryResource {
 
	@Context ServletContext context;
 
	/*
	 * Retrieves a list of the top level product categories and their immediate children
	 * Implements the GET on this resource, can return values in JSON or XML
	 */
	@GET
	@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
	public List getProductCategories() {
 
		ProductCategoryBll bll = new ProductCategoryBll();
		return bll.listParentProductCategories( context );
 
	}
 
}

Using cURL I was able to see that what I retrieved looked like this:

{"productCategory":[{"categoryDescription":"Office Supplies","categoryId":"2","categoryLockedFlag":"N","categoryName":"Office"},{"categoryDescription":"Consumer Electronics","categoryId":"3","categoryLockedFlag":"N","categoryName":"Electronics"},{"categoryDescription":"Books, Music, and Movies","categoryId":"1","categoryLockedFlag":"N","categoryName":"Media"}]}

This is perfectly valid shorthand JSON – a single wrapper object whose productCategory property holds an array of three objects. Backbone, however (or maybe it is jQuery under the covers?), really did not like this – it created a collection with one thing in it, a productCategory, and then tried to parse that as my ProductCategory model object, which obviously failed.

Here is my collection that was failing to parse correctly:

window.CategoryCollection = Backbone.Collection.extend({
	model: ProductCategory,
	url: "webresources/productcategories"
});

The fix was pretty simple once I realized what it was doing – I needed to override the collection’s parse to handle the funk it didn’t like from the Java server (alternatively I could have altered the Java end, but I really didn’t want to mess with it at the time, and it is perfectly valid JSON syntax). Here’s the altered collection:

window.CategoryCollection = Backbone.Collection.extend({
	model: ProductCategory,
	url: "webresources/productcategories",
 
	// Override parse as the default JSON Jersey returns with a collection does not work with Backbone
	parse: function( resp, xhr ) {
		return resp.productCategory;
	}
});
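For completeness, populating the collection then works as usual – a minimal sketch, assuming the ProductCategory model is defined as before:

var categories = new window.CategoryCollection();
 
// fetch() runs the response through parse(), which unwraps
// resp.productCategory before the models are created
categories.fetch({
	success: function( collection ) {
		console.log( 'Loaded ' + collection.length + ' categories' );
	}
});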

’til next time…


More on Node VM

So I wanted to understand a bit more about what is going on under the covers with the Node vm module. To do that, I pulled open the node code itself. To start with, when we do a require('vm') we are referencing the builtin vm module, which is contained in Node’s lib folder under the name ‘vm.js’. The code for it is quite simple, so I’ll paste it here:

var binding = process.binding('evals');
 
exports.Script = binding.Script;
exports.createScript = function(code, ctx, name) {
  return new exports.Script(code, ctx, name);
};
 
exports.createContext = binding.Script.createContext;
exports.runInContext = binding.Script.runInContext;
exports.runInThisContext = binding.Script.runInThisContext;
exports.runInNewContext = binding.Script.runInNewContext;

This is from the version I am currently running which is Node 0.4.9.

What we see here is a call to process.binding to access ‘evals’ in the node C++ code. The rest is mostly just mapping logic, giving us the various methods we have already been using by mapping them to the methods in the C++ code. Pretty simple. To understand what is actually happening here though, we have to jump down into the land of C++.

In the src directory for node, in the file node_script.cc, we find the method that does the real work – WrappedScript::EvalMachine. Taking a look at this, we can get a sense of what differs between passing in a context via runInContext vs runInNewContext and runInThisContext.

The first significant time we see a differentiation is here:

  if (context_flag == newContext) {
    // Create the new context
    context = Context::New();
 
  } else if (context_flag == userContext) {
    // Use the passed in context
    Local<Object> contextArg = args[sandbox_index]->ToObject();
    WrappedContext *nContext = ObjectWrap::Unwrap<WrappedContext>(sandbox);
    context = nContext->GetV8Context();
  }

We can see that if we do a runInNewContext, we must create a new context object. On the other hand, if we pass in a previously created context object, we instead perform a variety of gyrations to ‘unwrap’ the context and get the V8 context out of it.

Later, we also find that disposal is quite different:

  if (context_flag == newContext) {
    // Clean up, clean up, everybody everywhere!
    context->DetachGlobal();
    context->Exit();
    context.Dispose();
  } else if (context_flag == userContext) {
    // Exit the passed in context.
    context->Exit();
  }

It is clear from our performance results that the object generation and subsequent detach/dispose is expensive enough to make a noticeable difference in our run time.

We also find this code, which runs both when the user asks for a new context and when they pass in an existing one:

  // New and user context share code. DRY it up.
  if (context_flag == userContext || context_flag == newContext) {
    // Enter the context
    context->Enter();
 
    // Copy everything from the passed in sandbox (either the persistent
    // context for runInContext(), or the sandbox arg to runInNewContext()).
    keys = sandbox->GetPropertyNames();
 
    for (i = 0; i < keys->Length(); i++) {
      Handle<String> key = keys->Get(Integer::New(i))->ToString();
      Handle<Value> value = sandbox->Get(key);
      if (value == sandbox) { value = context->Global(); }
      context->Global()->Set(key, value);
    }
  }

Additionally, there is this code, which copies the values back out to the object used from javascript:

  if (context_flag == userContext || context_flag == newContext) {
    // success! copy changes back onto the sandbox object.
    keys = context->Global()->GetPropertyNames();
    for (i = 0; i < keys->Length(); i++) {
      Handle<String> key = keys->Get(Integer::New(i))->ToString();
      Handle<Value> value = context->Global()->Get(key);
      if (value == context->Global()) { value = sandbox; }
      sandbox->Set(key, value);
    }
  }

Looking at all of these, however, it is important to note that they are if and else-if statements – so all of this code (along with a few other tidbits) is ONLY executed if the context is to be new or user provided. There is a third option in the code – runInThisContext – and none of this code executes in that case, which seems consistent with the significant performance difference we see between runInThisContext and the other options.

It is also important to note that when supplying a context, the way values are communicated back and forth is actually via a copy operation – the script is not directly editing the object you passed in.
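You can see this copy-back behavior from the JavaScript side with a quick experiment – a minimal sketch of the semantics described above:

var vm = require('vm');
 
var sandbox = { counter: 0 };
 
// the script increments 'counter' in its own global context; the
// result is copied back onto the sandbox when execution finishes
vm.runInNewContext('counter = counter + 1;', sandbox);
 
console.log(sandbox.counter); // prints 1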


Node VM Continued

One thing I noticed today is that this works:

var util = require('util');
var vm = require('vm');
 
var contextObject = {};
contextObject.contextMethod = function(text) {
	console.log(text);
};
 
var myContext = vm.createContext(contextObject);
myContext.contextMethod2 = function(text) {
	console.log(text);
};
 
var scriptText = 'contextMethod("Hello World!"); contextMethod2("Hello Universe!");';
var script = vm.createScript(scriptText);
script.runInContext(myContext);

Which in general makes sense, but it is nice to see that you can modify the context.


Node.js Virtual Machine (vm) Usage

For my project I want to have run-time mutable code. There may be some better ways to do this in node that I do not know of, but the easiest I could find in the base install was the vm module. I did some experimenting, and here’s what I found.

To use the vm module you need to ‘require’ it:

var util = require('util');
var vm = require('vm');

I grab util as well in this case just because I prefer its logging methods to the standard console.log – the time stamps help me keep straight what I was running when.

For my quick test, I’ll just do some hello world.

var util = require('util');
var vm = require('vm');
 
vm.runInThisContext('var hello = "world";');

So this takes the code inside the string, compiles it via the V8 JavaScript engine, and executes it. Really cool, but unfortunately there is no external evidence of what happened. Let’s make it print something out.

We might think to try something like this:

vm.runInThisContext('var hello = "world"; util.log("Hello " + hello);');

However, we would find that this throws an error, complaining that ‘util’ is not defined. There is a rather subtle reason for this: the runInThisContext method of vm does use the current context, but it does not have access to local scope – and we defined util in local scope by using the ‘var’ keyword.

If you change the first line to remove the ‘var’ keyword, then running it will give a result like so:

17 Aug 23:41:32 - Hello world

Anything defined as a global variable is accessible to us with runInThisContext. A good thing if you want to have access to those global variables, a bad thing if you would prefer to limit what the script has access to. For instance, with runInThisContext you can do things like this:

vm.runInThisContext('var hello = "world"; console.log("Hello " + hello);');

Assuming this is trusted code, that can be fine – but if it isn’t trusted, or if (as in my case) it is trusted but you want to explicitly encourage it to conform to a set API for interacting with things outside of it, you may wish to exclude the dynamic script from access to the global context. Fortunately, vm has a method which does this, called runInNewContext. The next line will not work, because runInNewContext creates a new, ‘empty’ context for the script to run in rather than using the existing one – the script then has access to nothing outside of what V8 itself provides, and cannot access global node functions.

Fails:

vm.runInNewContext('var hello = "world"; console.log("Hello " + hello);');

It will say that ‘console’ is undefined, as the script no longer has access to the global scope where console lives.

So that is good – we have a way to limit the access the script has, but we need to provide it with something in order for it to affect anything outside of itself and be useful. We do that by providing the context, or ‘sandbox’, for it to use via the optional second argument. Here’s an example:

var util = require('util');
var vm = require('vm');
 
var myContext = {
   hello: "nobody"
}
 
vm.runInNewContext('hello = "world";', myContext);
 
util.log('Hello ' + myContext.hello);

The second argument takes an object whose variables are injected into the global context of the script. It is my understanding that this passing is actually done via some fairly sexy copy operations, so perhaps a relevant performance note is that the size of this context is probably a significant factor (I will need to do some testing myself to see). Similarly, you can of course pass in functions with the context – and those functions may make calls outside the sandbox object itself, such as this:

var myContext = {
}
myContext.doLog = function(text) {
	util.log(text);
}
 
vm.runInNewContext('doLog("Hello World");', myContext);

And of course we can define whole object structures as such:

var myContext = {
   utilFacade: {
   }
}
myContext.utilFacade.doLog = function(text) {
	util.log(text);
}
 
vm.runInNewContext('utilFacade.doLog("Hello World");', myContext);

Though I have found that at this point my JavaScript editor of choice begins to get confused about what is legal and what is not.

Stepping back for one second, it is important to think about what is going on here. We are feeding in text, which is compiled at the time runInNewContext is called. Depending on the application, it may not be desirable to compile at the moment you run – we might instead want to do this step beforehand. This is accomplished via the Script object, like so:

var myScript = vm.createScript('var hello = "world";');
myScript.runInNewContext(myContext);

And we can still include calls to our context, so this works fine:

var myContext = {
  utilFacade: {
  }
}
myContext.utilFacade.doLog = function(text) {
	util.log(text);
}
 
var myScript = vm.createScript('utilFacade.doLog("Hello World");');
myScript.runInNewContext(myContext);

That said, it is important to understand that this is not very safe – by the very fact that you are ‘updating’ the context, you know there can be leakage – for example:

var myScript = vm.createScript('someVariable = "test"; utilFacade.doLog("Hello World");');
myScript.runInNewContext(myContext);
 
var anotherScript = vm.createScript('utilFacade.doLog(someVariable);');
anotherScript.runInNewContext(myContext);

This will print out ‘test’ to the log. We could just as easily have replaced anything in the context, causing crazy unexpected behavior between executions. Additionally, there are some other fundamentally unsafe things about this – for instance, our script could consist of a never-ending loop that hangs the entire node instance, or throw an error that halts it. In general, this simply is not a safe avenue for dealing with untrusted code. I’ve thought about the problem a bit and read some blogs on it; perhaps I’ll post something about what to do in such a situation later.
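To make the loop hazard concrete, here is a one-line sketch of the failure mode – don’t actually run this anywhere you care about, since node is single-threaded and the call never returns:

// this blocks the node event loop forever – the process will never
// execute anything else again and must be killed externally
vm.runInNewContext('while (true) {}', {});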

For now, I would be remiss if I did not mention this “undocumented” method – note the new method used to create the context, and the associated call differences (passing in the context object instead).

var myContext = vm.createContext(myContext);
 
var myScript = vm.createScript('someVariable = "test"; utilFacade.doLog("Hello World");');
myScript.runInContext(myContext);
 
var anotherScript = vm.createScript('utilFacade.doLog(someVariable);');
anotherScript.runInContext(myContext);

If you are like me, you may be wondering ‘what is the point? it seems to work the same’ – and as far as I can tell it does operate pretty much the same in terms of functionality. I may be wrong on this point in some specific use case; if so, please feel free to drop a comment and I’ll update accordingly.

While functionally it seems the same, something very different is occurring under the covers. To get an idea of what, precisely, I think it is worthwhile to consider this gist somebody put together, which provides some useful reference:

https://gist.github.com/813257

For the lazy, here’s the code:

var vm = require('vm'),
   code = 'var square = n * n;',
   fn = new Function('n', code),
   script = vm.createScript(code),
   sandbox;
 n = 5;
 sandbox = { n: n };
 benchmark = function(title, funk) {
   var end, i, start;
   start = new Date;
   for (i = 0; i < 5000; i++) {
     funk();
   }
   end = new Date;
   console.log(title + ': ' + (end - start) + 'ms');
 }
 var ctx = vm.createContext(sandbox);
 benchmark('vm.runInThisContext', function() { vm.runInThisContext(code); });
 benchmark('vm.runInNewContext', function() { vm.runInNewContext(code, sandbox); });
 benchmark('script.runInThisContext', function() { script.runInThisContext(); });
 benchmark('script.runInNewContext', function() { script.runInNewContext(sandbox); });
 benchmark('script.runInContext', function() { script.runInContext(ctx); });
 benchmark('fn', function() { fn(n); });

This is a pretty simple benchmark script – there are some fundamental issues with it, but it gives enough of a view to gauge the relative performance of the various methods of executing the script. The script.* functions use the pre-compiled script, whereas the first two compile at time of execution. The last item is a reference point. Executed on my machine, this gives me the following result:

vm.runInThisContext: 127ms
vm.runInNewContext: 1288ms
script.runInThisContext: 3ms
script.runInNewContext: 1110ms
script.runInContext: 23ms
fn: 0ms

So you can see that there are significant performance implications. The pre-compiled examples run faster than those that compile on the fly – no real surprise there – and if we were to increase the number of executions we would find this difference exacerbated. Additionally, something significantly different is happening with ‘runInContext’ and ‘runInThisContext’ vs ‘runInNewContext’. The difference is that runInNewContext does exactly what it says – it creates a new context from the object being passed in, on every call. The other two methods use an already created context object, and we can see there is quite a benefit inherent in this – creating a context is an expensive task.


Simple Telnet Server in Node.js

I was playing with Node.js and started putting together a simple telnet server to learn a bit about it. I am going to walk through it in phases from “super simple” to more complex. But first, let’s ask ourselves – what the heck is a ‘telnet server’?

Telnet is a client-server protocol based on a reliable connection-oriented transport, and it is generally text based. Today telnet is typically used for connecting to unix based systems to do administration (though SSH is now favored, as it provides encryption), for administration of network elements, for MUD style games played over the internet, etc. A telnet connection is usually made over TCP, though telnet actually pre-dates TCP (it originally ran over NCP). For most purposes telnet has largely been replaced by SSH, which is more secure. That said, there is still a thriving community that uses it for text based interaction on the internet, and as one of the most basic protocols I felt it was a good place to learn how to use Node.js.

Some folks tend to confuse telnet with raw TCP communication – and in fact, many telnet clients are used for testing commands against a TCP server (such as a web server speaking HTTP). There are true raw TCP clients available today – netcat and socat, for example, and PuTTY supports raw connections on Windows. A few things differentiate telnet from raw TCP, namely certain rules around how to handle carriage returns, plus the ability to negotiate options in-band. Also, telnet is not initially able to transmit binary data, as it is not ‘8-bit clean’ by default – though 8-bit clean operation can be negotiated.
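Those negotiations happen in-band as ‘IAC’ (Interpret As Command) byte sequences. As a minimal sketch – the option bytes here come from the telnet RFCs, and this is illustration only, not a full negotiation implementation – a server asking a client to switch to binary mode could write (using the socket object we’ll get from node’s net module shortly):

// IAC (255)  WILL (251)  TRANSMIT-BINARY (option 0)
// a real server must also read the client's DO/DONT reply from the
// data stream rather than just assuming the client agreed
socket.write(new Buffer([255, 251, 0]));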

In any case, enough theory, let’s get to some code. For my simple starting point I am going to begin with a simple TCP connection. Here’s some Javascript code for Node.js to do it:

var net = require('net');
 
var server = net.createServer(function (socket) {
	socket.write('Welcome to the Telnet server!');
}).listen(8888);

So the first line loads the net module from node, which we then use in the next statement. We call net.createServer() and pass an anonymous function to it. The function accepts a socket and writes a string out to it. We then tell the server to begin listening.

Now this code looks pretty awful honestly – but it is how much of the Node.js tutorial code is written up. Let’s pretty it up though into something more manageable/understandable by a normal human being.

var net = require('net');
 
/*
 * Callback method executed when a new TCP socket is opened.
 */
function newSocket(socket) {
	socket.write('Welcome to the Telnet server!');
}
 
// Create a new server and provide a callback for when a connection occurs
var server = net.createServer(newSocket);
 
// Listen on port 8888
server.listen(8888);

So, the same thing, but with comments, a named function defined separately, and so on. Put this in Node.js and run it, and node will listen on port 8888. Should you then open a terminal window (or command line on Windows) and type telnet localhost 8888, you will see the message come from the server – and nothing else. Voila, you have now connected via telnet. Very impressive.

We’ll find that the server does absolutely nothing when we send commands at it. In fact, there is no way to close the connection without killing telnet itself or issuing the escape character (probably ctrl-] in most clients). I would like to do something with the data, so let’s make a quick chat server. I want multiple people to be able to connect to my server (already handled for us by node), and any time a person types something, I want their text sent to the other people. Thankfully, on the Node.js home page there is a short video where they do exactly this, so it isn’t exactly a great leap. We’ll need to listen for a ‘data’ event, and when we get data, we’ll send it out to everybody.

var net = require('net');
 
var sockets = [];
 
/*
 * Callback method executed when data is received from a socket
 */
function receiveData(data) {
	for(var i = 0; i<sockets.length; i++) {
		sockets[i].write(data);
	}
}
 
/*
 * Callback method executed when a new TCP socket is opened.
 */
function newSocket(socket) {
	sockets.push(socket);
	socket.write('Welcome to the Telnet server!\n');
	socket.on('data', function(data) {
		receiveData(data);
	})
}
 
// Create a new server and provide a callback for when a connection occurs
var server = net.createServer(newSocket);
 
// Listen on port 8888
server.listen(8888);

So, a couple of things. We created an array called sockets to hold all the sockets that connect to our server. We then define a receiveData function – a callback which will be called any time data comes in from a socket. Our trusty newSocket method from before now registers receiveData as the callback for the ‘data’ event. Using that array of sockets, we then send any data we receive out to all the sockets.

This is neat, but there are several issues with it. First and most obvious: what happens when a socket disconnects? The answer is Bad Things(tm). We need to remove the socket from the sockets array so that we do not try to send messages to a dead socket. So we add the following function:

/*
 * Method executed when a socket ends
 */
function closeSocket(socket) {
	var i = sockets.indexOf(socket);
	if (i != -1) {
		sockets.splice(i, 1);
	}
}

And we can then do this inside our newSocket method:

socket.on('end', function() {
	closeSocket(socket);
})

You can see here that I used an anonymous function to call the function I wrote, because I wanted to pass in the socket object explicitly. You may have noticed that I also did this for the data event – the reason being that I also want to provide the socket to the receiveData method. Why? Because the second issue we face is that input is echoed back to the person who entered it in the first place. This is not optimal given the telnet interface we are using, so we correct it by sending to every socket except the one that sent us the data. We modify receiveData as such:

function receiveData(socket, data) {
	for(var i = 0; i<sockets.length; i++) {
		if (sockets[i] !== socket) {
			sockets[i].write(data);
		}
	}
}

We also need to modify the newSocket function so that the socket, as well as the data, is passed to the receiveData method. Here’s the full code as it stands after these modifications:

var net = require('net');
 
var sockets = [];
 
/*
 * Method executed when data is received from a socket
 */
function receiveData(socket, data) {
	for(var i = 0; i<sockets.length; i++) {
		if (sockets[i] !== socket) {
			sockets[i].write(data);
		}
	}
}
 
/*
 * Method executed when a socket ends
 */
function closeSocket(socket) {
	var i = sockets.indexOf(socket);
	if (i != -1) {
		sockets.splice(i, 1);
	}
}
 
/*
 * Callback method executed when a new TCP socket is opened.
 */
function newSocket(socket) {
	sockets.push(socket);
	socket.write('Welcome to the Telnet server!\n');
	socket.on('data', function(data) {
		receiveData(socket, data);
	})
	socket.on('end', function() {
		closeSocket(socket);
	})
}
 
// Create a new server and provide a callback for when a connection occurs
var server = net.createServer(newSocket);
 
// Listen on port 8888
server.listen(8888);

So that’s cool, but what if a person wants to quit communicating? They could kill their telnet client, or issue the escape character – but this seems less than optimal. Instead, let’s give them a command they can type which will close the connection for them. We’ll need to match a particular text entry against our command and disconnect the socket when we receive it. Here’s some naive code to do that:

if(data == "@quit") {
   socket.end('Goodbye!\n');
}

However, this actually won’t work, because what is sent to us is not just the text, but also the carriage return and newline that follow it. We need to strip these out – we’ll make a simple function that does so with a regular expression:

/*
 * Cleans the input of carriage return, newline
 */
function cleanInput(data) {
	return data.toString().replace(/(\r\n|\n|\r)/gm,"");
}

Now we simply need to call this function on our data and use the return value for comparison against our command, rather than the raw input. Here’s the complete code with this modification:

var net = require('net');
 
var sockets = [];
 
/*
 * Cleans the input of carriage return, newline
 */
function cleanInput(data) {
	return data.toString().replace(/(\r\n|\n|\r)/gm,"");
}
 
/*
 * Method executed when data is received from a socket
 */
function receiveData(socket, data) {
	var cleanData = cleanInput(data);
	if(cleanData === "@quit") {
		socket.end('Goodbye!\n');
	}
	else {
		for(var i = 0; i<sockets.length; i++) {
			if (sockets[i] !== socket) {
				sockets[i].write(data);
			}
		}
	}
}
 
/*
 * Method executed when a socket ends
 */
function closeSocket(socket) {
	var i = sockets.indexOf(socket);
	if (i != -1) {
		sockets.splice(i, 1);
	}
}
 
/*
 * Callback method executed when a new TCP socket is opened.
 */
function newSocket(socket) {
	sockets.push(socket);
	socket.write('Welcome to the Telnet server!\n');
	socket.on('data', function(data) {
		receiveData(socket, data);
	})
	socket.on('end', function() {
		closeSocket(socket);
	})
}
 
// Create a new server and provide a callback for when a connection occurs
var server = net.createServer(newSocket);
 
// Listen on port 8888
server.listen(8888);

So, simple enough – a little telnet chat server. In actuality it is more of a TCP chat server, as there is a lot more to telnet, and obviously it is very naive and brittle in the face of errors – but that is for another day.


Installing Node.js

An acquaintance of mine noticed that I put together the previous post regarding Javascript inheritance in Node.js, and asked what Node.js was. When I explained, the next question of course was how to install it. I don’t want to reinvent the wheel here, so here are some links on Node.js.

Node.js Home Page

Node.js Install Instructions
