
On my first real job P1

This is a trip down memory lane to the 2002 era of software development in Mexico. In retrospect, this particular environment molded my mind toward always finding ways to improve developer productivity.

Memories from 2002

While still in college I was fortunate to land a spot as a contractor on a technology migration of an existing system: replacing a client-server architecture with a web-based solution, while keeping the supporting back-end services mostly unaware of the change on the client side.

It was a long time ago, so technology standards were quite different. The original client was a Visual Basic 6 application that had already endured multiple iterations; the web-based implementation was a Java Servlets and JSP application developed mostly by junior or trainee developers like myself, along with a couple of more senior developers. It was mostly a translation of VB6 code to JSP with a sprinkle of JavaScript that could run on Netscape Communicator... yes, those were odd times to be coding. Here be demons of a hopefully long-forgotten era.

I say I was fortunate because we had to learn quickly from our mistakes and change our way of doing things quite often, along with the tools we could adopt.

At first, JSPs were used mostly like PHP pages, mixing user interface, JavaScript, and in some places even SQL sprinkled in odd spots. In part this smell was driven by the need to avoid restarting the Java server to pick up new code, which took 5 or more minutes during which no development could happen. All developers shared the same server, so to avoid wasted time everyone started putting logic in JSPs, which reloaded automatically, instead of in classes. But this had an adverse side effect: even though it sped up the development cycle, it caused resource leaks across the whole server, which made every required restart take even longer.

At this point we started doing things differently: we tried to run code on our local machines, although it took a massive effort to configure a servlet container that replicated most of the expected environment without killing our machines. At least now we could code the UI locally and mock some of the back-end services. It was time well spent, as it made us faster by not stepping on other developers, and we could schedule the expensive server restarts for the hours when everyone was out for lunch, or just before and after work hours.
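The mocking we did was nothing fancier than swapping an interface implementation for an in-memory stand-in. A minimal sketch of the idea, with all names invented for illustration (the real service contracts in the project were different):

```java
// Hypothetical back-end contract; the real services were remote and slow.
interface CustomerService {
    String customerName(int id);
}

// In-memory stand-in so the UI could be developed without the shared server.
class MockCustomerService implements CustomerService {
    public String customerName(int id) {
        return "Customer-" + id; // canned data instead of a remote call
    }
}

public class LocalUiDemo {
    public static void main(String[] args) {
        // UI code depends only on the interface, so it never notices the swap.
        CustomerService service = new MockCustomerService();
        System.out.println(service.customerName(42)); // Customer-42
    }
}
```

The key design point is that the UI code holds a reference to the interface, so the same pages run unchanged against the mock locally and against the real service on the shared server.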

Compilation was also initially done entirely on the server, but once we started working locally we could validate files and even run test cases against them! We started writing Ant scripts to do so. We dabbled in JUnit, but we couldn't push the test cases to the server because the libraries were not available there, so we had a broken process: the server had only production code, while our local machines had production code plus our test cases. This made another big problem evident: we didn't use version control for our daily work. Version control was used only at the deployable-artifact level, once we wanted to release something to the next environment. I'm sad to say we couldn't solve this versioning problem with that particular client, but we found workarounds that made our lives somewhat easier and safer by zipping everything at regular intervals. Yes, it is horrible, but we were really junior devs trying to survive with little guidance.
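The zip-everything workaround amounts to a few lines over the standard library. A rough sketch of that "poor man's version control", assuming nothing beyond `java.util.zip` (paths and names are illustrative, not the ones we actually used):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Snapshot a working tree into a zip file at regular intervals:
// no history merging, no diffs, just a dated archive you can roll back to.
public class SnapshotZipper {

    // Zips every regular file under sourceDir into zipFile.
    public static void snapshot(Path sourceDir, Path zipFile) throws IOException {
        try (ZipOutputStream zos =
                 new ZipOutputStream(Files.newOutputStream(zipFile))) {
            Files.walk(sourceDir)
                 .filter(Files::isRegularFile)
                 .forEach(file -> {
                     try {
                         // Store entries relative to the source root.
                         zos.putNextEntry(new ZipEntry(
                             sourceDir.relativize(file).toString()));
                         Files.copy(file, zos);
                         zos.closeEntry();
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("worktree");
        Files.writeString(src.resolve("Hello.jsp"), "<%-- sample file --%>");
        Path zip = Files.createTempFile("snapshot", ".zip");
        snapshot(src, zip);
        System.out.println("snapshot written: " + (Files.size(zip) > 0));
    }
}
```

Compared to real version control this loses authorship, history, and merging; its one virtue is that it is trivial to set up when you are not allowed to install anything.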

JavaScript was then split into independent files that could be reused by all developers, and we started specializing. Bear in mind that there were almost no frameworks or libraries we could use at that point, as Struts was still in beta; we were hand-coding pagination, and in some cases even HTML generation from JavaScript without using the DOM (I still shiver at this).
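Hand-rolled pagination boils down to a bit of arithmetic that we had to repeat, page by page, in those days. A sketch in Java of the core of it (class and method names are mine, not from the original code):

```java
import java.util.List;

// The arithmetic behind hand-coded pagination: how many pages exist,
// and which slice of the data a given page shows.
public class Pager {

    // Total number of pages needed for itemCount items at pageSize per page.
    static int pageCount(int itemCount, int pageSize) {
        return (itemCount + pageSize - 1) / pageSize; // ceiling division
    }

    // The slice of items shown on a given zero-based page.
    static <T> List<T> page(List<T> items, int pageIndex, int pageSize) {
        int from = pageIndex * pageSize;
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> rows = List.of("a", "b", "c", "d", "e");
        System.out.println(pageCount(rows.size(), 2)); // 3
        System.out.println(page(rows, 1, 2));          // [c, d]
    }
}
```

Trivial as it looks, getting the last-page boundary right (the `Math.min`) was exactly the kind of detail every developer re-derived by hand before shared utility files existed.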

But even after all these improvements, I think the one that made us most productive was when we stopped treating the project as a direct translation from language to language and understood that what we had to replicate was the functionality exposed to the end user, without even trying to replicate the patterns and coding practices the original application had in place.

Effectively, for the first months we were just doing copy/paste/mutate, bringing along every vice the original code had, plus our own errors and the differences in architecture. The original code could assume there was only a single user interacting with it, while on the web it had to serve all users, from different tenants, at the same time.

Once this clicked, we went on and finished the modules in the current release, then hastily went back and rewrote everything we had done in the prior months. We were very thorough, yet with this new vision we finished in 2 weeks modules that had previously taken at least 3 months with more developers than we had at that point. We did the same amount of work, faster and better, with fewer developers, and the client noticed this change of pace and way of doing things.

Final note

I believe this recognition by the client and the contracting org made it clear to me that working smart is really the only way to work: removing waste, removing duplication, removing collisions between developers, and shortening the feedback loop. There have been lots of books and reports by smarter people than me, but I was fortunate enough to get real experience with this on my first gig, and to be part of a self-improvement effort driven by developers, not by upper management or process.
