You are in a maze of twisty passages, all alike.
  • I still like microservices

    Posted on March 17th, 2025 Finster No comments

    In recent years, the “microservices” approach to service-oriented architectures has faced significant backlash, some of it deserved. Tales of Uber’s nightmare of jumbled tech stacks notwithstanding, I believe there is still value in this approach, especially since Kubernetes persistently remains a thing in DevOps.

    I think a big strength of the microservice approach has more to do with developer quality-of-life than anything related to site reliability or CI/CD. I know this is well-trodden ground, but recent experience has reinforced some basic truths that companies still manage to screw up in hilarious fashion.

    Trying to rewrite your entire platform in one fell swoop is the wrong approach. It’s the classic rewrite-vs.-refactor debate when it comes to paying down tech debt. The better solution is to systematically rewrite bite-sized portions, carefully replacing aging legacy code in chunks as small as you can get away with. The inability to take the better approach is almost always a failure of management, not of developers, by the way.

    I encourage the interested reader to check out the sad, cautionary tale of Sonos’ new app roll-out. (For extra credit, check out what happened with the launch of EA’s Battlefield 2042.) Sonos’ platform was saddled with over 20 years of unpaid tech debt. That by itself speaks volumes about the quality of Sonos management. It’s a common trope, though, and certainly not isolated to Sonos.

    The disease is one of managers who, in one form or another, are defined by the products and new features they can roll out. After all, if there are no new features, how will we know managers are doing anything at all? Google has become a running joke of failed product launches because, as rumor has it, managers are allegedly compensated for launching products rather than for making sure those products remain usable for the poor consumers who end up relying on them.

    I pity the poor souls (many of them business owners) who had the rug pulled out from under them when they lost critical functionality because of the Sonos rollout. Could this sadness have been avoided? Yes, but it takes managers willing to push pause on feature rollouts and create windows where tech debt can be addressed by the dev teams involved in building and maintaining said debt.

    In my estimation, if a platform is built on microservices, you already possess a map to all the regions of your codebase where tech debt can be addressed. And not only do you have the map, but if your dev teams made sure to keep services loosely coupled and minimized leaky abstractions, then you can easily address specific services in a systematic approach that avoids problems endemic to “The One Big Rewrite”. Features don’t get dropped on the floor, but are updated and replaced in manageable chunks.

    I have personally seen several companies go down the path of rewriting a content pipeline or a supply chain management package, trying to solve everything for everyone. These projects are always monolithic in approach, go over budget on both time and money, and inevitably miss the important basic functions or pain points that users actually need solved.

    This seems so obvious. I feel like a kindergarten teacher trying to explain, “by the way, isn’t it great that 2+2=4?” And yet every year another major corporation is crippled by managers who don’t know how to properly manage tech debt. Profits evaporate, margins disappear, and ultimately jobs are lost.

    Well, what if your codebase isn’t so cleanly delineated? What if spaghetti is everywhere and there is no easy road to refactoring? You can only eat an elephant sandwich one bite at a time. If you notice the symptoms of too much tech debt, like the increasing time needed to build a new feature or major fires whenever a new version is released to production, you’re already drowning. But it’s never too late to take a step back, sit your dev teams down, and start building and prioritizing tech debt tasks. I guarantee that every dev who has spent any amount of time with a particular set of code will have a laundry list of pain points they would love to solve.

    If the bleeding is especially severe, meaning everything is currently on fire because of too many shaky rollouts, enforce a code freeze immediately. Identify and triage all of the breaking issues (i.e., what’s generating Fatal/Error logs in production) and get them fixed. Then, once production is stable, try to attack the debt. A good approach is to have your devs meet maybe once a month and brainstorm what tech debt needs to be addressed in their sphere of influence. Write down everything and prioritize items into “Must Do”, “Should Do”, “Would Be Nice To Do”, and “Haha Yeah Sure”. Must Do items are things that have to be addressed before any more feature work should even be considered. Then, start including those items in your sprints and Kanban boards.

    In the end, the key to successfully managing tech debt lies in recognizing its inevitability and proactively addressing it with a clear, systematic strategy—whether through the inherent advantages of microservices or a disciplined refactoring approach for less modular codebases. By empowering developers to tackle pain points, fostering a culture that values long-term stability over short-term feature churn, and holding managers accountable for sustainable progress rather than flashy launches, companies can avoid the catastrophic pitfalls that have ensnared the likes of Sonos and EA. It’s not rocket science; it’s just disciplined, intentional engineering paired with leadership that prioritizes the health of the platform—and the sanity of its users—over the allure of the next big thing. Ignore this at your peril, because unchecked tech debt doesn’t just slow you down; it can sink you entirely.
  • Test-Driven Development and me: NUnit vs. MSTest

    Posted on February 18th, 2014 Finster No comments

    Where I work, we’re at a point in our development where we’re re-evaluating our continuous integration process. Of course, like any good development group, that means thinking about our unit/integration testing.

    When I first joined this team, there really weren’t any good tests. There were a few tests doing things like testing setters and getters on various object properties, but most methods had no test coverage whatsoever. So, I dumped the property tests and had my team get to work implementing what we called “unit tests” but what actually ended up being integration tests. These tests would hit endpoints of our APIs and make sure the output was as expected. This was partially due to expediency, i.e., getting something in place so that we could have a battery of tests to run before check-ins. It was also partially due to wanting to avoid an onerous collection of tests that we spent more time maintaining than our actual code. Our architect had been down that road at past companies, so we were strongly cautioned against building a huge, convoluted unit test base.

    Anyway, now that we’re trying to flesh out our continuous integration process, I’m taking the opportunity to really re-evaluate our testing infrastructure and patterns. A big part of that is going to be researching NUnit and seeing if we want to use it instead of Microsoft’s testing framework. I should probably mention at this point that we are doing mainly C# development. I also want a solution that doesn’t require re-engineering our codebase. That is a challenge, as most mock object frameworks really only work well when dependency injection is used everywhere. And we don’t do dependency injection. So, we’ll see what shakes out.
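
    For a sense of what that might look like, here is a minimal NUnit sketch of the kind of endpoint-level test described above; the base URL, route, and expected payload are made up for illustration and aren’t from our actual API.

    // Hypothetical NUnit sketch of the endpoint-style tests described above.
    // The base URL, route, and response shape are illustrative only.
    using System.Net.Http;
    using NUnit.Framework;

    [TestFixture]
    public class WidgetEndpointTests
    {
        private const string BaseUrl = "http://localhost:8080/api";

        [Test]
        public void GetWidget_ReturnsExpectedPayload()
        {
            using (var client = new HttpClient())
            {
                // Call the running service and inspect its raw output, rather than
                // mocking internals (which would require dependency injection).
                var response = client.GetAsync(BaseUrl + "/widgets/42").Result;
                var body = response.Content.ReadAsStringAsync().Result;

                Assert.IsTrue(response.IsSuccessStatusCode);
                StringAssert.Contains("\"id\":42", body);
            }
        }
    }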

  • Why I decided not to use ORM

    Posted on August 8th, 2013 Finster No comments

    My opinion is that it’s good to reexamine your approach early and often. I currently work on a service platform that is part of a larger service-oriented approach. We have one source of data, and want to push that to many different clients.

    I didn’t design the architecture or database schema, but it was fairly straightforward. We have a pretty flat object model. We do have some associations that could be implemented, but when dealing with Widget and SubWidgets we expose a lot of these relations through views.

    Now, switching to an ORM, like NHibernate, kind of presupposes that you have a certain amount of hierarchy in your data model. Things belong to other things and whatnot. Unfortunately, our data model isn’t really arranged like that. But honestly that’s not a big deal. No sacred cows, here. If going to an ORM is really the right option, then refactoring and rebuilding our data model should be on the table. Since we’ve done a good job of keeping concerns separated when it comes to database access, it wouldn’t be that complicated as we wouldn’t have to change much at the service layer. We would just be changing what happens at the layers below it.

    That’s all well and good, but there are some things I like about what we have right now. First of all, it’s fast. Really fast. Yeah, I know, you have to be careful about premature optimization, but in this case, where we need to supply service calls to iOS, Android, Web, and another platform with strict demands on service request times and page load times, getting things out of the database should not be a bottleneck. On top of that, since we have a flat hierarchy with SQL views exposing related data in a single dataset, we know that at any given entry point we are getting the right data and only the right data.

    With most ORMs, and NHibernate is no exception, there is always the issue of lazy loading associated data sets. There is a certain amount of overhead in fine-tuning the data loading so that you don’t accidentally pull down an entire table’s worth of data when all you wanted were a few associated rows. The converse can also happen, where you want two or three layers of associated records but lazy loading kicks in and you don’t get them fast enough.

    NHibernate (unlike some PHP ORMs) actually seems to handle all of this really well, with a lot of intuitive configuration and conventions to help make things performant when you need it. But it all adds overhead. Right now, when I set up a new data contract, I can set it and forget it. With an ORM, my experience has been that there’s always a game of fine-tuning things; otherwise, your code ends up generating ten or a hundred times more SQL queries than are really needed.

    One of the major things I’m looking for with this, besides performance, is maintainability. Our current system has a sort of home-grown migration process. It works for now, but it requires a fairly high level of SQL knowledge to implement. That puts my platform developers in the position of almost having to be DBAs to make a change to a data contract. This is certainly not an ideal situation. Fluent NHibernate would help with that immensely: we could store our mappings as C# classes and let version control handle the revisioning. Rolling back a database change becomes somewhat problematic unless we handle any rollback as just another change set, and unfortunately it doesn’t really help with the process of migrating data.
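
    To make the mapping idea concrete, here’s a rough Fluent NHibernate sketch using the Widget and SubWidget entities mentioned earlier; the property and column names are assumptions, not our actual schema.

    // Rough Fluent NHibernate sketch; entity, property, and column names are
    // assumptions for illustration, not our real data model.
    using System.Collections.Generic;
    using FluentNHibernate.Mapping;

    public class Widget
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual IList<SubWidget> SubWidgets { get; set; }
    }

    public class SubWidget
    {
        public virtual int Id { get; set; }
        public virtual string Label { get; set; }
    }

    public class WidgetMap : ClassMap<Widget>
    {
        public WidgetMap()
        {
            Table("Widgets");
            Id(x => x.Id);
            Map(x => x.Name);

            // Eager-load the association up front; this is exactly the kind of
            // per-mapping tuning (lazy vs. eager loading) that an ORM introduces.
            HasMany(x => x.SubWidgets)
                .KeyColumn("WidgetId")
                .Not.LazyLoad();
        }
    }

    Because the mappings are just classes, a schema change shows up as an ordinary diff in version control, which is the maintainability win described above.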

    For these reasons, I’ll probably sit on what we have until I find something that not only abstracts the migration process, but also doesn’t require a lot of overhead in terms of performance tuning and maintenance.

  • Using Fakes Framework to Streamline Unit Testing Woes

    Posted on July 11th, 2013 Finster No comments

    I love having unit tests. However, sometimes there are… obstacles to generating the unit tests I want. Like any good unit tester, I want to be able to set up mock objects to help isolate the code I’m testing. Unfortunately most of the decent mock object frameworks rely heavily on dependency injection to get your mock object into the code you want to test.

    Dependency injection is fine, but I hate refactoring a bunch of old code (that is stable, clean, and maintainable) just for the purpose of getting a unit test working. Well, that’s where Microsoft’s new Fakes Framework comes in.

    The Fakes Framework provides two basic tools. Stubs are mock objects that work like most other mock objects you’ve dealt with. You give it an interface, it gives you a mock object that you can initialize for your unit test. Nothing too surprising there. However, the really interesting feature of the Fakes Framework comes in the form of Shims.

    Shims allow you to circumvent any .NET method so that it returns what YOU tell it to. The classic example provided by Microsoft is the DateTime.Now property. Typically, this returns the current date and time. We all know that. However, I can use Shims to force it to return an arbitrary date and time, like Jan 1, 2000, for example.

    // create a ShimsContext; shims are cleaned up when the context is disposed
    using (ShimsContext.Create())
    {
        // hook a delegate to the shim property to redirect DateTime.Now
        // to return January 1st of 2000
        ShimDateTime.NowGet = () => new DateTime(2000, 1, 1);
        Y2KChecker.Check();
    }

    Now, anytime Y2KChecker.Check() calls DateTime.Now, it will receive Jan 1, 2000 instead of whatever the date actually is. The implications for testing any kind of time-sensitive code are pretty clear. But how does this help with regular mocking? Couldn’t you just Shim everything and then be good to go?

    Well, yes, I suppose you could, but the power of a good mock object framework is in reducing the amount of code you have to write, and helping you to not shoot yourself in the foot. Shims are flexible and powerful, but I’d still rather use a mock object framework that will provide useful features like verification.

    But let’s say you have some Legacy Code™ that relies on some kind of data access objects that you want to abstract out of your current unit test. After all, you don’t want to test the database access, you just want to test your code to make sure it’s processing the data correctly. The catch is that you aren’t using dependency injection, so there isn’t really a clean way to get your mocked object into your code… oh but there is. Shims!
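
    Here’s a rough sketch of how that might look. The WidgetRepository and WidgetProcessor classes are made up for illustration, but the ShimsContext and AllInstances pattern comes straight from the Fakes Framework (the shim type is generated when you add a Fakes assembly for the project containing WidgetRepository).

    // Hypothetical example: WidgetRepository and WidgetProcessor are illustrative
    // classes, not from a real codebase. This fragment lives inside a test method.
    using (ShimsContext.Create())
    {
        // Redirect every instance of WidgetRepository.GetWidgets(int) to return
        // canned data, so the code under test never touches the database.
        ShimWidgetRepository.AllInstances.GetWidgetsInt32 =
            (repository, count) => new List<Widget> { new Widget { Name = "Test" } };

        var processor = new WidgetProcessor();
        var results = processor.ProcessTopWidgets(5);

        Assert.AreEqual("Test", results[0].Name);
    }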


  • Starcraft Course Lectures Available Online

    Posted on February 13th, 2009 Finster No comments


    In my line of work (web development) every now and then you get the chance to work on a project that stands above and beyond other endeavors. Recently, I’ve had the chance to help develop academicearth.org.

    Academic Earth is kind of a “Hulu” for academia. They’ve been gathering OCW (OpenCourseWare) videos from all over the web, from such places as Yale, Stanford, MIT, etc. Some of that content comes from the new Starcraft Studies course at UC Berkeley.

    Having watched this first lecture, it’s obvious that the game of Starcraft has continued to evolve past the doldrums of the dismal “Big Game Hunters” matches that drove me from the game years ago. I was most intrigued by the descriptions of some of the South Korean pros who have been playing a more defensive game, again something that was unheard of in serious play here in the States many years ago.

    I look forward to seeing more of these lectures and perhaps gaining a deeper understanding of the RTS genre as a whole.

    I’m sure Academic Earth will be updating as the course continues, but while you’re over there check out some of the other lectures.

  • How to set the session path for CakePHP sessions

    Posted on May 15th, 2008 Finster 3 comments

    I had an issue where my CakePHP app (which was part of a larger webapp) was setting its own cookie_path to ‘/cakeapp’, the location of Cake. Meanwhile, the general app, with a cookie_path of ‘/’, was also setting up its own session. So, I needed Cake to use ‘/’ for its session.cookie_path instead of defaulting to the Cake app’s path.

    The simplest way to do this for me was to set Session.start to false in core.php. Then, I added a $this->Session->activate('/'); call to my app_controller.php in a beforeFilter() function. Now, the Cake app is using ‘/’ for its session.cookie_path.

    h/t AD7six in #cakephp for showing me this.

  • Minnebar Progress, so far

    Posted on May 10th, 2008 Finster No comments

    I gave my presentation at 9am this morning. It seemed to go pretty well. There were probably about a dozen people there. Felt like I did a good job of introducing CakePHP. 

    After, listened to a presentation on using memcached with MySQL. Very interesting. I’ve never really even thought about memcached before, and I really learned a lot about it.

    Then, listened to a presentation on LAMP and how it’s good for a lot of things, and how virtualization can allow you to run any stack you want on top of LAMP. Compared it to the history of the screw. The presenter seemed like a Ruby guy, but it was still very interesting and free of evangelism. 😉

    Now getting ready to listen to a panel on state of tech in MN with a bunch of tech guys, including a Microsoft rep.

  • MinneBar, HO!

    Posted on May 6th, 2008 Finster 1 comment

    Just put my name down for MinneBar.

    Since this is my first time attending MinneBar (or any barcamp, for that matter), I’ll be putting together a presentation on building a simple CMS in CakePHP.

    I’m kinda nervous, but it should be a lot of fun.