Thursday, February 21, 2013

PowerShell Quirks

This morning I had to figure out how to create 45 insert statements in order to populate a new table in Oracle from existing data.
The table is a cross-reference table and only consists of two columns that need to be populated with the data. Using an Excel spreadsheet I had created a two-column CSV file by hand, which was a long and tedious process. I did not want to duplicate that effort, and since I already had the data in a friendly format, surely there was an easier way.

After experimenting with the IDE I was using, I found that I could not simply import the data: it would either destroy the table and recreate it with only those two columns (destroying the columns that I needed to hold the create date, created by, modified date and modified by values), or it would reject the data because it did not contain enough columns, in which case I might as well go back to writing each individual SQL statement by hand.

I thought about writing a macro in Vim to accomplish this, which is certainly doable, but frankly it's been a while since I wrote anything remotely that complex in Vim, so I dismissed that idea as too time-consuming.

Then I realized that this should be trivial in PowerShell and that the solution could be used by everyone in the development department in a similar situation. Indeed, creating the script that would generate the SQL statements was trivial.

An Import-Csv filename gave me a hash array with each row of my CSV file; a single variable with tokens for the two items I wanted to replace, an Out-File and a foreach loop later, and it was done. Coding is awesome!!

What I didn't know was that PowerShell gets quirky with carriage returns and newlines when using Out-File to write to a file. No matter what I tried, I always wound up with a file containing a single line with forty-five SQL statements. After beating my head against it for a while I finally realized I was hurting myself. The simple way to do this (although not elegant) is to use the Write-Output command on the string and pipe that to the Out-File command using the append flag.
The Write-Output command always adds a Windows newline character to the end of the string, but since it's piped to Out-File, the newline goes to the file rather than the console where it normally goes.
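The whole thing boils down to a few lines. This is just a sketch of the approach; the CSV column names (SourceId, TargetId), table and column names, and file paths here are made up, not the ones from the original script.

```powershell
# Illustrative only: column names, table name and paths are assumptions.
$template = "INSERT INTO xref_table (source_id, target_id) VALUES ('{0}', '{1}');"

Import-Csv .\mappings.csv | ForEach-Object {
    # Write-Output appends a Windows newline to each string, and piping
    # through Out-File -Append keeps one statement per line in the file.
    Write-Output ($template -f $_.SourceId, $_.TargetId) |
        Out-File -FilePath .\inserts.sql -Append
}
```

The key is that each statement goes through its own Write-Output | Out-File -Append pipeline, which is what preserves the line breaks.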

Just something to be aware of if you haven't played with writing to files in PowerShell. If you're aware of that issue and have your data in a CSV format, writing this script to create tokenized strings takes about two minutes.

I agree with Scott Muc. Many things Microsoft does leave a bad taste in my mouth, but PowerShell typically isn't one of them.

Wednesday, February 20, 2013


Over the course of time there is an inevitable conversation that arises, usually on a periodic basis, about standardizing on particular development methodologies and frameworks. Just so I'm clear, I am going to refer to everything a developer uses to accomplish his specific task/project as a tool, from the IDE to the management methodology to the frameworks. I'm also speaking in general of companies that do internal development but don't actually create software for retail sale, i.e., they don't make any profit off of development.

I'm going to start with an observation. Everyone I can ever recall having this conversation with, without exception, has agreed that one of the issues with development, especially at large companies, is the corporate tendency to try to impose on developers a standardized framework for each problem. Everyone sees this all the time in whatever department they work in.

Just about everyone could come up with a story of having been in a situation where something was mandated from above that they had to live with. If you were lucky, you got a voice in the decision-making process, but regardless of whether your favorite shiny object came out on top, you had to live with it.

These conversations start with statements similar to the ones below.
We need to settle on a single database and make sure that everything uses that particular DB. We need to standardize on SVN/Git/Mercurial/TFS and make sure that everything gets moved into that repository so that we can reap the benefits. We need to pick a single IDE and everyone sticks with it. And on and on.

In my personal experience, eventually everyone jumps on board the push for standardization, if for no other reason than that they hate the product that seems to be winning. This despite the fact that these same people lament this tendency in business, because they really believe that the right tool for the right job is the correct way to code!!
In other words, in almost all cases, your colleagues, peers and managers all truly believe that the task at hand should dictate the tools you use to accomplish that task as efficiently and effectively as possible.

Now, there are very valid reasons for standardizing policies, processes and whatnot. I adore standards, truth be told. And if I am the business and my bottom line is profit, there are even more reasons. I can leverage everything from economies of scale to product support by purchasing a single product and sticking with it. This is certainly a very large concern and an effective factor that encourages businesses to push strongly for standardization.

Even developers in the trenches have a tendency to push for standardized frameworks and tools, because most have a favorite, and learning new ones can be viewed as a painful struggle that wastes their valuable time and is ultimately pointless since they could have accomplished their goal in a fraction of the time with the tools they are comfortable using. Few people willingly stretch far outside their comfort zone, even fewer will do it while under the stress of deadlines and, just a guess here, even fewer managers will want to defend the developers for doing so, because that means their productivity is diminishing while they fight a learning curve.

Now let's step back into the theoretical world of developing software solutions for a minute. As a business, my goal is to provide a software solution that generates the greatest possible satisfaction and benefit for the consumer, so that I can generate as much desire for my product, and hence profit, as possible. As a developer, my goal is to create a solution that meets the customer's needs, is easy to use and generally exceeds their expectations if at all possible. There are other goals involved for everyone, to be sure, but they aren't my point, so I'm being a little vague here.

If the corporate standards dictate that I must use a particular technology tool to accomplish my task and my tasks are widely varied, then sooner or later (usually sooner) I'm going to have a task in which I cannot deliver the best software that I can, because I won't be able to use the appropriate tool(s).

Is a SharePoint installation really the best solution for a given task? It makes perfect sense to leverage the corporate SharePoint installation if the task lends itself to that environment but what if it doesn't?
Do I really need a full-blown MS SQL Server or Oracle installation for every task that requires a DB back end? What if it needs to be used by some guy in the field who has no access to an internet connection?
Is the best tool for creating a data access layer really NHibernate when the model is trivially simple? Couldn't I just use Entity Framework and, if the application grows, swap in a more appropriately complex tool like NHibernate later when it's actually needed?

What happened to that lamentation that we all shared that the best tool for the task should be used rather than some standard that was decided upon?

Back to the NHibernate vs. EF debate (because that's what my coworkers were discussing earlier today): why do we have to choose between NHibernate and EF?

In my opinion, there should be room for a choice of the most appropriate tool. Entity Framework is to Lincoln Logs as NHibernate is to Legos. No one is saying that children should abandon Lincoln Logs for Legos because Legos are superior. They both have their place. They are both the right tool provided you're working on the right job.

The biggest argument is that it's inefficient to use multiple tools that perform the same function, and I would agree with that up to a point. If the tools do the exact same job the exact same way, then maybe you have an argument for standardization, assuming standardization actually generates real benefit in the long run.

A common example is the IDE. A standardized IDE is something I've seen pushed at a large number of businesses. My question is, why? What do we gain? Typically the answer is that IDE-specific issues will arise. If Bud checks in his development work and he's written it using VS2012, and Bob checks it out and opens the files in, say, Eclipse, the project files VS uses will make no sense and Bob can't compile the projects. Or there are compatibility issues between versions of IDE XYZ, so we all need to be using not only the same IDE but the same version.

My answer is the same: why? It takes some effort to ensure that everyone is using the same build process, but if everyone truly is using the same standard build process, why can't Bob compile Bud's project? Other than convenience for offline work, why are the project files even in the repository? Chef, Maven, TFS, etc., don't care about your project files (or at least they shouldn't, IMO). They exist to provide an independent and impartial build environment open to whoever is allowed to request a build.

Another example: most of our developers are familiar with Ruby, we don't want to learn C# or Scala, the paradigms are too different, it will require lots of extra effort to train them, and it will be more expensive and time-consuming to build the software if we do it that way.

The above answers are correct, but most people ignore what would be gained. Exposure to new technologies, techniques, frameworks - the things that are the meat and potatoes of your daily work - is one of the primary ways we grow as developers! Good developers will get better, bad developers will get good, and everyone's selection of appropriate tools will have increased. This is never a bad thing!
There are valid reasons this approach won't work on every project, obviously, but the benefits of learning new things in a group environment are well documented. Here is the first thing I came across when I Googled 'learning in a group environment': Cooperative and Collaborative Learning

If it works for kids in a classroom, why wouldn't it work for adults? Isn't that one of the lessons that agile programming and methodologies tell us over and over? Group collaborative learning is good? Surely you can find at least one project a year that is small enough to benefit from this approach. The more often you do it, the easier it will become to do it again, maybe on larger projects. And each time, the tools that the development teams are familiar with grow.

Just my two cents.

Tuesday, February 5, 2013

Adventures in .NET

So today I am going to try to create a proof-of-concept application using VS2012 and a blank Lightswitch project. I'm interested to see how that goes. Coming from the Java OSS side of life, this will be my first attempt to build anything with the Lightswitch framework or .NET 4.5, and my first application (as opposed to a simple service) with any of the above.

Let's see how it goes.

First hurdle: I have to install the Oracle Developers Toolkit in order to get any kind of usefulness out of any of the frameworks. This already involves two things that for me run the range from mildly annoying to rage-induced blindness: installers and Oracle drivers.

Having already gotten off to a stellar start and having plenty of time before the installer finishes, let me mention a couple of things that irritate me about both of those.

1) I know that drivers and frameworks and all that jazz for databases grow over time and can take up a lot of space but, come on, folks, why is my client-side-only install taking up more space than the actual frameworks it supports? Really? REALLY??!?!?
Since when did a driver, even a development interface, become an instrument of the almighty advertisement? It really offends me that when all I want, care about or will ever use is a simple driver that *might* take up a few megabytes of disk space, it comes so bloated with crap that companies are desperate to get me to use that when the install is finished I have lost tens of gigabytes of disk space! Or maybe I'm narcissistic; maybe they don't want me to use any of that crap at all, maybe they just want to use me to pad the numbers they hand out to declare how popular their widget(s) are by counting me because I installed it. OK, now that I've contemplated that, I'm still offended! The end result is the same. I'm losing time, space and hence money because they want to saddle me with everything and the kitchen sink when all I wanted was a damned driver.

2) Why in the hell do they give me the option to choose my installation directory and then ignore my request? Seriously, (almost) every installer asks me where I would like to install my software, but once we get out of the tiny stuff and into things like Java, Oracle clients, and my all-time favorite, VS2012, the installer treats it as a strong suggestion at best and outright ignores it at worst. When I install VS2012 and it asks me where I want it to go and I say someplace on the F drive (my data drive), it will still install gigabytes of stuff on the C drive, far more than what it installs on the F drive.
I have a small boot drive, not by choice, but because that's what was passed out to me when I first came on board here. When I got around to installing VS on my machine in the spring of last year, it wouldn't fit. I didn't have enough gigabytes on my boot drive to install even the bare minimum. Really? My IDE HAS to have access to the C: drive in order to function?? And if that's not the case, why are you ignoring the install directory preferences you just solicited from your user?? Oracle does the same thing, although not to nearly that extreme.

OK, it looks like, close to an hour later, I am finally getting close to actually finishing the Oracle install. I can't wait to see the next hurdle.

Friday, January 11, 2013

Must Remember to Write, Even If I Don't Think It's Worth It

Since I haven't posted an entry in a bit, I thought I would throw something up.

My first Coursera class of the year starts next week and I'm looking forward to it. It should give me a chance to reinforce the budding FP skills that the Scala class started me on, since the class will be a comparative look at programming languages, with the languages in use being ML, Racket and Ruby. Should be interesting.

I am still loving the new RC helicopter hobby. In an unexpected twist I think I get as much pleasure out of repairing the little sucker when I crash it as I do actually flying it. Definitely not what I was expecting on several levels. I am guessing that this will balance out once my flying skills start to catch up with my repair skills. Although, to be fair, I have only had to make three repairs. On the other hand, I have screwed around and made two sets of training gear, taken a stab at leveling the swashplate by eyeball (I really need a swashplate level to do it properly) and I'm about to try my luck at setting up the blade tracking and blade balancing.
Which reminds me, I should go by HobbyTown and pick up some tracking tape and a blade balancer. One of the guys who got me started on this new hobby is convinced I'm going to wind up getting a quad flyer at some point in the sooner-rather-than-later future. I have to admit, those things are *FUN*. Although he says he finds the quad more difficult to handle, I personally think it's a dream to fly. Goofy easy to control compared to my mCPx heli and almost as maneuverable. It also has a much larger battery for longer flight times and is still capable of doing loops (I haven't checked, but I think it should be able to do rolls as well).

I may pick one up just because some of the motor skills should be transferable to my heli and it will give me confidence when I get discouraged at my progress on the mCPx that if I can do it on the quad, I can do it on the heli. As a bonus, I would be pretty confident flying it around the house and torturing the animals with it!

OK, I do feel better about posting since it's part of my plan to improve fundamental skills and obviously, I do need it.

Wednesday, January 2, 2013

The New Year Has Begun

It's clearly been a while since I put up a post. This has been due as much to the holidays as to distractions, laziness and procrastination.

I was busy with a production deployment in early December and have been on vacation enjoying the holidays with family and friends since then. Today is my first day back at the office in a couple of weeks, and while it's good to be back, I'm still sorta glad it's a short week. It was hard getting up this morning and making the ride in with the temp at 37 degrees and my wife still warm in bed.

It was nice to see a couple of good clean blog entries by Uncle Bob giving a great introduction to Functional Programming and why it's supposed to be "The Next Big Thing". I personally think it's past that point already; it's just that the majority of people don't necessarily realize it is, or why everyone is saying that. The relevant posts are FP Basics E1 and FP Basics E2.

Also on my radar this month will be another Coursera class. It doesn't look like it will be particularly challenging, but it's an introduction to a range of different programming languages, functional as well as OO, with most of which I have spent little, if any, time doing anything more significant than the old standby, "Hello World!" I look forward to learning how to make a program say that in a few more languages, and with a little luck and some work I can get a solid handle on some new tools that I will be able to use effectively and appropriately.

I've also picked up RC helicopter flying. As with most things I do, I kind of jumped in over my head, and while I didn't spend a ton of money, what I did do is overestimate my skills (again) and get a heli that is, in all honesty, not something I should be trying to learn on as a beginner. I cheer myself up with the knowledge that if I can conquer this bird I will have made a significant achievement (well, to me anyway), and that I actually love just practicing ground exercises, much less the low-hover stuff that I occasionally use to remind myself why I don't get the bird in the air until I've mastered other skills first.

I also have at least a couple of interesting tasks at the office to work on in addition to the dreary crap that normally makes up a work day. Neither is really sophisticated, but one of them will afford me the opportunity to learn and explore some new tools and frameworks. For a guy who has been a die-hard fan of OSS and POSIX-style OSes, this is extremely important now that I have gone to the Dark Side and work as a perm at a Windows-based company.
As an aside, I have been very impressed over the last few years with how Microsoft has not only started opening up their software but also started to adopt and promote things that I have considered industry standard since the early 2000s but that MS refused to acknowledge.

I will try to organize my thoughts and present some cogent observations on my introduction into the universe of C# web application frameworks and, if I'm lucky, maybe a little bit about how F# plays within them.

Friday, December 7, 2012

Production Mistakes

Once upon a time I had a very bad scare. One of my responsibilities was to maintain a vendor-created application that handled the flow of commodity trades on an exchange into our risk management system. That particular aspect of my job was not my favorite part, but someone had to do it, right?

This vendor application was primarily a configuration-driven monstrosity. Messages are in XML, and the conversion from the format the exchange provided us to the schema our back-end systems required was primarily done using a tag-mapping approach. For example, if the exchange provides us with an XML message and the counterparty in the deal is represented as an ID, say 12345, the application maps that number to a mnemonic used by our system, say BGKG. Simple, right?

Yes, it is very simple. It also doesn't scale for a damn. There were around 500-ish of these mappings in production. And that's just the tag-mapping portion. That doesn't take into account any business-logic-driven mappings, XSL files that mangle the XML, etc. This quickly becomes a pain point for maintenance, especially when there are potentially multiple different places for logic to hide on any given mapping. The logic could be in the XSL outside the application, in the message logic built into the application, in the templates for each message route, and in several more places. On top of this, there was nothing preventing logic affecting the same element(s) from being spread among ALL of these locations!

Now that you have more background than you care about, I will actually get to the scary part. When I first started working on this, the traders who actually execute the trades on the exchange had an ID assigned by the exchange. The means of mapping this to our system ID was to have a list of all the traders' exchange IDs in an XSL file and test for a match, replacing the exchange ID with the local system ID. This is ugly for several reasons. First, it's long, tedious and error-prone to maintain this list. I don't know when it was last updated before I saw it, but I know that they never removed anyone from that list, ever. Even if someone left the company, they stayed there. Second, it means that the Risk Management group (the business-unit owner of the application) was dependent on IT development (i.e., me) to make a code change and promote it from the development environment through the QA environment to the production environment every time they wanted to add a trader to the system. This can become painful for a company that grows through acquisition on a regular basis, by the way. I could go on, but you get the point, and I'm sure you can come up with your own reasons I haven't thought of as to why this is a bad way to handle it.
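To give a feel for what that XSL list looked like, here is an illustrative fragment; the element names and IDs are made up, not taken from the actual files.

```xml
<!-- Illustrative only: element names and IDs are invented. Every new
     trader meant hand-adding another xsl:when and redeploying the file. -->
<xsl:template match="TraderId">
  <TraderId>
    <xsl:choose>
      <xsl:when test=". = '10001'">JSMITH</xsl:when>
      <xsl:when test=". = '10002'">BJONES</xsl:when>
      <!-- ...hundreds more entries, never pruned... -->
      <xsl:otherwise><xsl:value-of select="."/></xsl:otherwise>
    </xsl:choose>
  </TraderId>
</xsl:template>
```

Multiply that by every trader the company has ever employed and you can see why nobody wanted to touch it.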

I wanted to empower the business users of this application to be able to manage their own traders in the system. They do it for all the other aspects and this is supposed to be configurable by the business users anyway. Configuration changes don't have to go through the same channels as development and I don't want a call while on vacation to run something like this through development to production because they can't do it themselves.

I started by creating a simple DB schema to hold lookup tables mapping exchange IDs to our system IDs (and a few other things). Nothing complicated, but if I put that kind of mapping into a database, I can then stand up a web front end on it and make that available to the users. Tada!! Now they can configure to their hearts' content without bothering me.
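The shape of such a lookup table might look something like this sketch; the table and column names here are hypothetical, not the ones actually used.

```sql
-- Hypothetical DDL for the trader lookup table; actual names differed.
CREATE TABLE trader_xref (
    exchange_id   VARCHAR2(20) NOT NULL,  -- ID assigned by the exchange
    system_id     VARCHAR2(20) NOT NULL,  -- mnemonic used by our system
    created_by    VARCHAR2(30),
    create_date   DATE DEFAULT SYSDATE,
    modified_by   VARCHAR2(30),
    modified_date DATE,
    CONSTRAINT pk_trader_xref PRIMARY KEY (exchange_id)
);
```

With the mapping in a table instead of an XSL file, adding a trader becomes an insert a business user can make through a front end rather than a code change.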

Now, life being what it is, while I had converted the mappings to use the database and promoted the change to production several months earlier, I had not yet set up a web front end.

We were testing changes in QA that would allow us to trade financial instruments we had never traded before. Obviously we needed to test and make sure that the new trade types worked as expected, but also to do some regression testing and make sure these changes hadn't broken existing trades. We were getting some odd results from the new trades in QA, so I went investigating. After discounting all the obvious answers, I went back to basics and looked again. It turned out that the data source being used for the table lookups was not providing the right answers because it was pointed at the wrong database. It was still looking at the development environment.

I had a sudden cold chill, and after the shivers stopped I dared to look at the configuration in production and saw that yes, indeed, production was pointed at the development environment. I thought I might have a heart attack. Now, kids, this is emphatically NOT anything you ever want to have happen to you. Bear in mind that this industry was heavily regulated under SOX. Any idea what would happen if a SOX auditor got wind of something like that? Phrases like 'career-changing learning experience' spring immediately to mind.

Now, the good news is that all's well that ends well. Everything was fine, we got it all changed without a hiccup, and life went on. The part that scares me is what might have happened. I mean, this is development, man. If I had taken it into my head at any point during those intervening months to blow away the entire database, much less wipe a table (both of which I can do, will do, and are perfectly valid things to do in development), what kind of damage would have been done? Fortunately, not a lot, because recovery would have been simple (thank GOD), but how long would it have taken for us to figure out what happened? It took me a couple of days of poking around to finally decide I should go back to basics and check the data source configuration at the application server level.

My point here is that that mistake made me want to crap my pants. It was amateur-hour stuff that I should never have let happen. Even if I wasn't the one actually handling the deployment, I sit over the shoulder of the SAs and DBAs for almost every production deployment I make, and I should have seen it, thought about it or something! But despite all that, it was good for me in the end. Like most everyone who has ever done something as boneheaded as that, you can be sure I will be a damned sight more careful with my deployment instructions and with double-checking both my work and the deployment engineers'. It also reminded me that while vendor applications are a fact of life and you sometimes don't have an actual build process for them, there is nothing preventing you from creating a deployment build for one. I might not have to compile code, but I can damned sure set up the build server to check out a tokenized version of any configuration files that have changed and have those tokens replaced with the proper settings for whatever environment is being promoted to. You can also set up such a build so that it breaks on deployment if you forgot to put in the environment's DB password, for example. You really don't store passwords in your source code repository, right? Right??
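A tokenized config swap of that sort can be quite small. This is a hedged PowerShell sketch; the token names, file paths and settings are all invented for illustration.

```powershell
# Illustrative token replacement for a deployment build.
# Token names, paths and values here are made up.
$settings = @{
    '@@DB_URL@@'  = 'jdbc:oracle:thin:@prodhost:1521:PROD'
    '@@DB_USER@@' = 'app_user'
}

$config = Get-Content .\app-config.template.xml -Raw
foreach ($token in $settings.Keys) {
    $config = $config.Replace($token, $settings[$token])
}

# Break the deployment if any token was left unreplaced,
# e.g. a forgotten environment password.
if ($config -match '@@\w+@@') { throw 'Unreplaced token found!' }
$config | Out-File .\app-config.xml
```

The unreplaced-token check at the end is what makes the build fail loudly instead of quietly shipping a config pointed at the wrong environment.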

The only real defenses against mistakes like these are to be disciplined and diligent in your pursuit of the perfect build process to automate these tedious things and/or to document them. I suck at documentation, but it would have saved much time and headache if I didn't. Obviously I was not diligent or disciplined enough in my build process, mostly because until that event happened I really didn't think of it as a build process. My only defense is that working with vendor products is not something I have had a lot of experience with. I mean, I can create and deploy a JBoss server with whatever configuration files it needs, but that was always in the context of writing actual code. That puts my brain in a completely different mindset than the one I was in when thinking about how to configure this vendor application in a more useful fashion.

Ah well, live and learn.

Monday, December 3, 2012

Resurrection of a Classic

I had originally intended to limit my posts to just development related topics but I have found that I don't always feel I have something remotely interesting to say about that even once a week so I am taking a moment to post about something else, namely, my current favorite game.

I loved the original X-COM: UFO Defense published by MicroProse in the mid-nineties. I was in the military at the time and living in the barracks. There were only a couple of guys who had a PC, and the games always turned into a group effort within our circle of friends. Four or five of us would gather around the PC, drink beer, and at each turn of the game we would swap drivers: someone else would deploy the squad while the others either cheered them on, badgered them over what we thought was a poor move or dropped our jaws when the completely unexpected happened.

This game had a LOT going for it. It was detailed, complex, and especially at the later stages of the game, quite time consuming to control each of the 12 team members you could send on a given mission. We played through it any number of times as well as the follow up games that were published. We all gained immense enjoyment out of it and I remembered it fondly for many years.

You can imagine my reaction when I heard that X-COM was getting a reboot over fifteen years later and not only was it available for the PC but for consoles as well. I was overjoyed at the idea that my beloved game was going to be resurrected using technologies that were hard to imagine at the time but I was also extremely worried. I had no idea if the game that I remembered so well would be recognizable to me any longer.

I am happy to say that the fear was misplaced and the game I remembered, while not exactly what it was, is still, at heart, the same game I fell in love with. They managed to maintain the feel of the first game while streamlining the controls and (though I hate to admit it) the tedium of having to take care of so many details for so many characters on a team.

The transition was very cleverly accomplished. You still have resources, research, engineering, facilities and soldiers that need to be developed and managed, just like the original but the menu structures and controls for handling soldiers in mission have been streamlined and well thought out.

Maybe it's just my imagination, but the number of items that can be developed and created seems more limited than in the original. For example, I was surprised to discover that there was only one aircraft, a fighter, that could be developed in the game using alien technologies. The original had a number of different aircraft, both fighters and troop transports, that could be developed and built. I was disappointed to learn that I couldn't develop a superior troop transport and that I would forever be limited to a maximum of six troops on any given mission.

The limitation on the number of troops concerned me because at the later stages, in the original at least, I don't think you could have reasonably completed the game with a squad that small. I was afraid that the game was going to abruptly become either too soft or too difficult in the later stages.

Thankfully, I needn't have worried. In the reboot, the game designers have overcome this lack of firepower in the field by adding some features that both simplify the development of soldiers and balance the gameplay between the large numbers of enemy troops and the limited size of your squad.

In the original, if you wanted to outfit any given soldier with the proper gear, you had to review that soldier's complete stat and skill list, from the demolition skill (for rockets) or throwing skill (for grenades) to the stamina stat (how many things the soldier could do in a single turn) to the soldier's strength (how much gear they could carry before they just couldn't move).

The reboot introduces a class system for each soldier. After the first promotion they are given a class of heavy, assault, support or sniper. Each class has its own skill tree, and each skill tree has two separate but complementary branches to choose between as they advance. This both simplifies soldier development (I don't have to worry about developing a soldier's strength to carry a larger weapon) and loadout (only the heavy can carry a rocket launcher). At the same time, the skill trees give each class distinct advantages that allow each soldier to easily overcome one or two aliens if played to the strengths of its class.

The engineering foundry also contributes to this, allowing engineering research that will provide across the board improvements to equipment or the development of an entirely new branch of technologies.

The fifteen-plus years of hardware improvements in consoles (I play on my PS3) are pretty self-explanatory. Obviously the graphics are better, but the way the cinematics are worked into a soldier's (or alien's) actions, as well as the cut scenes, is extremely well done and lends flavor to the game without getting in the way.

All in all, I would say that X-COM: Enemy Unknown is a huge win for Firaxis and 2K Games and, thankfully, me.