PowerShell is my go-to terminal. I use it a lot, mainly for git commands, but also for things like mongo, JS tooling (gulp etc.), and the dotnet CLI, among others.
Like similar tools, PowerShell allows you to create a custom profile that gets loaded each time you start a new session. This profile can be utilised for creating shortcuts, aliases and functions that make everyday tasks and commands quicker and easier.
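For example, a profile might define an alias for a frequently typed command and a small helper function (the names here are purely illustrative):

```powershell
# Shorten a frequently used command with an alias
Set-Alias g git

# A small helper: create a directory and move into it in one step
function mkcd($path) {
    New-Item -ItemType Directory -Path $path -Force | Out-Null
    Set-Location $path
}
```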
By default, each time PowerShell is launched, it will try to load a .ps1 file called Microsoft.PowerShell_profile.ps1 from %USERPROFILE%\Documents\WindowsPowerShell\. You can see the expected profile path by typing $PROFILE…
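In a session, that looks like this:

```powershell
$PROFILE             # prints the expected profile path for the current host
Test-Path $PROFILE   # does the file actually exist yet?
```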
Unfortunately, this file does not exist by default, so you will have to create it. The easiest way to do that is…
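One common way to create it from within PowerShell itself (the -Force switch also creates any missing parent directories):

```powershell
if (!(Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force
}
```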
I’m sure there are many ways to do this, but the way that works for me is using OneDrive. I have my personal profile stored in my OneDrive so that it is synced to each computer that I use. Then I just create the file explained in the steps above and simply ‘dot-source’ my actual profile…
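A minimal sketch of that dot-sourcing approach, assuming the synced profile lives at a hypothetical path inside OneDrive (the OneDrive client sets the $env:OneDrive variable on Windows):

```powershell
# Contents of the local Microsoft.PowerShell_profile.ps1:
# dot-source the real profile kept in OneDrive (path is illustrative)
. "$env:OneDrive\PowerShell\MyProfile.ps1"
```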
Harness is a free and open source .NET library designed to manage the state of a Mongo database during testing.
At its core, it endeavours to aid unit and integration testing by putting one or more Mongo databases in a known state at the beginning of a test, or suite of tests. The framework allows you to control your code’s dependency on a Mongo database, therefore increasing test consistency and repeatability.
Why do I need this?
If you want to perform integration tests on your Mongo repositories, it would be really helpful to have a known set of data in your target database so that you can write accurate and consistent tests. Harness allows you to do exactly that!
How do I get it?
The easiest way to include it in a project is via NuGet; it can be installed with…
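From the command line that would typically be (the package id Harness is assumed to match the library name; verify against the NuGet gallery):

```shell
# via the .NET CLI
dotnet add package Harness
```

Or, from the Visual Studio Package Manager Console, `Install-Package Harness`.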
What’s the simplest way to get it working?
Do you like config files or fluent configuration in code? Either way, Harness has you covered. You can find examples of both here, or see below for a simple config file example…
Once you have added Harness to your project, one way to get up and running is with two simple JSON files…
The settings file must have a .json extension. It is a JSON object that contains a databases property, which is an array of database objects.
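A minimal sketch of such a settings file. The databases-as-an-array shape matches the description above, but the property names inside each database object (databaseName, connectionString, collections) are assumptions here; check the Harness documentation for the exact schema:

```json
{
  "databases": [
    {
      "databaseName": "TestDb",
      "connectionString": "mongodb://localhost:27017",
      "collections": []
    }
  ]
}
```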
I recently came across this xkcd comic and absolutely loved it. It made me laugh because I could see myself, and so many others, in it.
It’s funny because it’s true, and it got me thinking… this is something I’ve seen, not only during development, but through all stages of a project. It can be trivial, but it can also be something that slows a project, or even halts it completely.
The first thing that springs to mind, that I’ve seen and been guilty of many times, is something like the following example…
Supreme Leader General Base Class
“Yo! We need to write something to validate that a product has a price. That should be pretty easy, right?” “Yeah, but we should make it validate other values too, in case we need that later on.” “Oh yeah, and not just products either.” “Of course! We’ll definitely need to validate all the other things we haven’t written yet.” “It’ll save time in the long run!”
Before you know it, you have some GenericValidatorRepositoryBaseFactoryResolver<T> and loads of supporting classes and interfaces, when all you probably needed was one simple function.
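To illustrate the contrast (a deliberately simplified sketch, with hypothetical names): the actual requirement can often be met by one small function.

```typescript
interface Product {
  price: number;
}

// The whole requirement: validate that a product has a price
function hasPrice(product: Product | null): boolean {
  return product !== null && product.price > 0;
}
```

No base classes, no generics, no factories; if a second validation requirement genuinely arrives later, that is the moment to refactor towards an abstraction.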
It’s an exaggeration to make the point, and I’m not saying that abstraction is a bad thing, far from it! But I think it is really important to find a balance that keeps emphasis on the needs of the customer.
For instance, in our example, it might be that we never end up needing to validate any other type of object. We have delivered the requirement, but at what cost? There are costs we might not necessarily think about: it probably took longer than the simple implementation, the more complex code carries an extra maintenance overhead, and the increased number of tests now has to be kept up to date.
As a good friend once told me, “It may be the nicest code you’ve ever seen, but if it never gets shipped it’s useless!”
Of course, this isn’t an excuse to hack out any old code as quickly as possible. We still have responsibilities as developers to adhere to good coding practices, otherwise we end up going too far the other way, and actually getting slower in the long run.
The challenge is working out how to meet and deliver the requirement as efficiently as you can, but in a way that can easily be extended in the future - to borrow from Extreme Programming…
“Do the simplest thing that could possibly work.”
On a side note, there is a really great video from MPJ on his channel funfunfunction about the growth stages of a programmer that is relevant here, and well worth a watch!
Minimum Viable Product
Or, in other words…
Deliver value quickly!
Techopedia explains a minimum viable product (MVP) as…
… the most pared down version of a product that can still be released. An MVP has three key characteristics:
It has enough value that people are willing to use it or buy it initially
It demonstrates enough future benefit to retain early adopters
It provides a feedback loop to guide future development
The idea with creating an MVP is not only to speed up the learning process, but to also put emphasis on the needs of the customer.
As I said, we don’t just see this issue in development; we can apply the same thinking to requirements gathering and design. It’s so easy to get carried away when designing new features, and sometimes lose sight of the actual requirement and immediate customer need.
Extending the original example… “We need a new feature to dispense salt…” “Ok, no problem, that shouldn’t take too long. But what if it did all condiments?” “Yeah, great idea, but what about people with allergies?” “Ooh, good one! Let’s make sure it can track allergy information. Don’t worry, it’ll save time in the long run!”
Now your poor client, whose immediate need is just to be able to dispense salt, has to wait months instead of weeks for something they might never need - or want!
So, what do we do about it?
This is something that can so easily crop up in many parts of a project’s lifecycle, and there is not a simple ‘just do this’ solution. However, I think this is a very important and apt analogy to have in our minds every time we start something new.
As a software developer, or member of a project team, your goal should be to deliver value throughout the process. Start small and build iteratively!
So, every time we sit down to write a piece of code, or to design a new feature or a new set of requirements, we should simply ask ourselves…
There are a number of ways to create a new Aurelia app (the CLI, the skeleton projects etc), and plenty of demos on how to do it. Having been through this process, I was keen to create my own skeleton application, specifically using typescript and jspm/systemjs, to help embed the things I’d learned. Here’s what I did…
Before getting started, make sure you have node and npm installed, and the following packages installed globally…
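Based on the tooling used later in this walkthrough, the global installs would look something like this (the exact package list is an assumption; adjust it to your setup):

```shell
npm install -g jspm typescript gulp-cli
```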
At the time of writing (to the best of my knowledge), there is an issue with the typescript typings resolution for libraries that provide their own types (rather than through npm install @types/..) when you install them via jspm. Aurelia is one such library - the typescript compiler cannot find the Aurelia type definitions if you install aurelia with jspm. This Stack Overflow question has some more information.
To get round this, we can also install the aurelia libraries as development dependencies with npm…
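That might look something like this (the exact set of aurelia-* packages is an assumption; install whichever Aurelia libraries your app actually imports):

```shell
npm install --save-dev aurelia-framework aurelia-bootstrapper
```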
We will serve files from a directory called dist. This folder will contain the result of building the code we have written in the src folder. To do this, we will use gulp.
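A minimal gulpfile sketch for that build step (the task name and paths are illustrative, and this assumes the gulp and gulp-typescript packages are installed as dev dependencies):

```javascript
// gulpfile.js - compile src/**/*.ts into dist
const gulp = require('gulp');
const ts = require('gulp-typescript');

// reuse the compiler settings from tsconfig.json
const tsProject = ts.createProject('tsconfig.json');

gulp.task('build', () =>
  tsProject.src()              // source files per tsconfig
    .pipe(tsProject())         // compile with the project settings
    .js.pipe(gulp.dest('dist')) // write the emitted JS to dist
);
```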
Create the tsconfig.json file in the project root and add the following…
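As a starting point, a plausible configuration for a jspm/systemjs setup looks like the following. These options are an assumption on my part, not the definitive file; in particular, "module": "system" targets the SystemJS loader, and the src/dist paths match the folders described above:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "system",
    "moduleResolution": "node",
    "experimentalDecorators": true,
    "sourceMap": true,
    "rootDir": "src",
    "outDir": "dist"
  },
  "exclude": ["node_modules", "jspm_packages", "dist"]
}
```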