Dev environment 2016. Windows.


I changed jobs last month and had to build up my dev environment from scratch again. While doing that, I decided to write down some thoughts about it.

I guess it might be interesting to look back at some point and see how it evolves.

Background

At my previous employer, we were very into virtual machines. We had different base VMs which every developer could download.

That's extremely handy: a new hire has nothing to do but install a couple of non-standard tools and enter some credentials.

You remember those dialogs:

— I can’t build the project. Packages are missing…

— Oh, yep, you need to add this private NuGet feed.

— I can’t run the project locally.

— Yeah, I think you need to put these lines into your hosts file.

Well, now it’s all gone.

It's not limited to this scenario, either. You want to experiment with a new unstable version of the framework (yes, .NET Core RC-final-almost-stable, I'm talking about you) and don't want to mess up your dev machine?

Just fire a new VM up.

Got a neat idea for a hackathon, but don't think the JDK belongs on your computer? Giving a tech demo at a local meetup?

A VM comes to the rescue.

Got a new computer? Just copy the VM over and you’re up to speed in 20 minutes.

Back to the topic

So, what do I have on my base VM?

Frameworks

I’m a .NET web developer, so nothing special here:

  • .NET Framework 4.5.2 and 4.6
  • Node.js (npm, gulp)

IDE and editors

Visual Studio Code

File Managers and command line shell

choco

Source control

GitKraken

GitKraken is quite heavy and not super fast, as most Electron-based tools are, but I find its history tree view very readable. The merge tool is not bad at all.

I do most of the git-related operations in git bash, though.

Debugging and profiling


  • DotPeek. A free .NET decompiler.
  • DotMemory. A .NET memory profiler.
  • WinDbg.
  • Fiddler. A free web debugging proxy.
  • REST and HTTP clients. I use two and can't decide which one I prefer over the other.
  • LINQPad. A .NET programmer's playground.

Communication

  • Slack
  • Skype

Other tools

Web tools and services

Besides all the tools above, which I have installed locally, there are web services I use on pretty much a daily basis.

  • requestb.in. An easy to use HTTP request inspector.
  • AppVeyor. A free CI/CD service for my open source projects.
  • regex101.com. A super awesome regular expressions builder and debugger.
  • Toggl. A time tracker.

Batch install

Most of the tools can be installed from the Chocolatey gallery.

choco install dotnet4.5.2 linqpad -y

I prefer to have all the tools grouped into .config files:

<!-- commandline-tools.config -->
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="git" />
  <package id="far" />
  <package id="nuget.commandline" />
  <package id="conemu" />
</packages>

and they can be installed all together.

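Assuming the file above is saved as commandline-tools.config, Chocolatey accepts a packages.config-style file as the install argument, so a single command does the job:

choco install commandline-tools.config -y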

A call to action

Are there any tools around that are worth trying?

Please share in the comments. I'm always keen on trying new things.

Thoughts on C# 7 Local Functions

Frankly, when I first saw that C# 7 would come with local functions, I thought it was just a nice and compact way of defining local helpers. In fact, it's a much more interesting and useful feature. Today I'm going to talk about it in more detail.

Let’s start with a brief overview of the current situation.

Current options

Private methods

The first option, available since C# 1, is a private method.

class Bar
{
    public void Foo(int[] numbers)
    {
        foreach(var i in numbers)
        {
            Console.WriteLine(AsPrintable(i));
        }
    }
    
    private string AsPrintable(int i) =>  $"I have {i} here";
}

That's a clean and simple solution. It has a few issues, though.

AsPrintable might make no sense outside of the Foo method, but it's accessible to every other method inside the class, and it will be picked up by IntelliSense.

Private method discoverable via IntelliSense

Func and Action

We can try to hide our helper inside the scope of Foo method by converting it to Func<int, string>:

 class Bar
 {
     public void Foo(int[] numbers)
     {
         Func<int, string> asPrintable = i => $"I have {i} here"; 
         foreach (var i in numbers)
         {
             Console.WriteLine(asPrintable(i));
         }
     }
 }

Any disadvantages? Yep, a lot.

The call is unnecessarily expensive: it produces a couple of allocations (a delegate instance, plus a closure object when variables are captured).

There is no elegant way to define recursive lambda:

Lambda can not be recursive

We have to use an ugly trick:

Lambda can not be recursive
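In code, the trick from the screenshots boils down to declaring the delegate variable first and assigning the recursive lambda afterwards (a sketch with a made-up Fibonacci example):

// declare the variable first so the lambda body can refer to it...
Func<int, int> fib = null;
// ...then assign the recursive lambda
fib = n => n < 2 ? n : fib(n - 1) + fib(n - 2);
Console.WriteLine(fib(10)); // 55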

And last, but not least, lambdas are quite limited. You cannot use out, ref, params, or optional parameters, and they cannot be generic.

There is a bright side, though: a lambda can capture variables from the enclosing method (a closure).

Local functions

Local functions can be defined in the body of any method, constructor, or property getter and setter.
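For instance, nothing stops you from tucking a helper into a property getter. A small illustrative sketch (the Temperature example is made up):

class Temperature
{
    public double Celsius { get; set; }

    public double Fahrenheit
    {
        get
        {
            // a local function defined inside a property getter
            double ToFahrenheit(double c) => c * 9 / 5 + 32;
            return ToFahrenheit(Celsius);
        }
    }
}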

Since the compiler transforms a local function into a regular private static method:

Local function decompiled

there is no overhead in calling it, and it can have everything a regular method declaration can have: it can be asynchronous, it can be generic, it can be dynamic.
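Roughly, if AsPrintable from the first example were a local function inside Foo, the lowered code would look something like the sketch below (the method name is made up; the compiler actually generates an unspeakable identifier):

class Bar
{
    public void Foo(int[] numbers)
    {
        foreach (var i in numbers)
        {
            Console.WriteLine(Foo_AsPrintable(i));
        }
    }

    // generated from the local function; the name here is invented for readability
    private static string Foo_AsPrintable(int i) => $"I have {i} here";
}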

OK, there is a difference: local functions cannot be declared static. And local functions can capture variables from the enclosing block:

class Bar
{
    public void Foo(int[] numbers)
    {
        var length = numbers.Length;
        string Length() => $"length is {length}";
        Length();
    }
}

Useful bits

As Bill Wagner wrote, local functions are a perfect solution for argument validation in iterators and asynchronous methods.

The following code throws the exception right away rather than lazily on the first enumeration.

public IEnumerable<T> AsEnumerable<T>(params T[] items)
{
    if (items == null) throw new ArgumentNullException(nameof(items));

    IEnumerable<T2> Enumerate<T2>(T2[] array)
    {
        foreach(var item in array)
        {
            yield return item;
        }
    }

    return Enumerate(items);
}
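For example, with the method above the exception surfaces at the call site, before anyone starts iterating:

var items = AsEnumerable<string>(null); // throws immediately, not on the first MoveNext()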

Other observations:

Local functions support Caller Info Attributes

 public static void SlimShady()
 {
     void Hi([CallerMemberName] string name = null)
     {
         Console.WriteLine($"Hi! My name is {name}");
     }

     Hi();
 }

Fun fact: you can declare a local function with the same name as an existing private method, and the local function will hide it. The same goes for lambdas, though.

Local function has priority over private method
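A minimal sketch of the hiding behaviour (the Name example is made up):

class Hider
{
    private string Name() => "private method";

    public void Print()
    {
        string Name() => "local function"; // hides the private method above
        Console.WriteLine(Name());         // prints "local function"
    }
}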

CatLight

It's hard to imagine modern development without continuous integration and unit tests.

At work I hardly pay attention to the process, it just works: I push code to GitHub, TeamCity picks up the changes and starts the build, and a few minutes later I receive a Slack or email notification with the result. For my personal projects, however, things are different. I'm using the free plan on AppVeyor. It works pretty well, except that your build might stay in the queue for a while.

Last week I found a nice application called CatLight. It's a tiny tool which shows build notifications in the tray. OK, not that tiny, as it's built on Electron and .NET :) It works on both Windows and OS X.

Setup is very straightforward: provide your AppVeyor API key

screenshot

Select all the projects you want to be notified about

screenshot

Select all the events you want to be aware of

screenshot

Trigger the build

screenshot

and wait for the green light.

screenshot

C# 7 features preview

Last week my Twitter feed exploded with entries about the Microsoft //Build 2016 conference. As it's one of the most important events for the .NET dev community, MSFT prepared quite a few awesome announcements for us.

Since I got a bit sick this weekend, I had plenty of time to play with the new VS "15" and C# 7.

Getting started

Let's grab VS "15" first. There is a new installer, by the way.

Enabling experimental features

By default VS "15" uses C# 6, so we need to add a conditional compilation symbol to our project: __DEMO__.

To do that you should go to Properties > Build > Conditional compilation symbols.

conditional compilation symbols dialog

Once we're done with that, VS "15" will pick up the changes automatically.

Features

As of today, C# 7 ships with several features:

  • Binary literals
  • Digit separators
  • Local functions
  • Ref returns and locals
  • Pattern matching

Binary literals and Digit separators

These are very minor features, nothing to write home about. In addition to existing integer literals such as hex, we can now use binary ones.

class LiteralsDemo
{
  public void BinaryLiterals()
  {
    var numbers = new[] { 0b0, 0b1, 0b10 };
    foreach (var number in numbers)
    {
      Console.WriteLine(number);
    }
  }
}

Simple and works as expected.

binary literals output

The same goes for digit separators. A similar feature has existed in Java since version 7.

public void DigitSeparators()
{
    var amount = 1_000;
    var thatIsALot = 1_000_000;
    var iAmHex = 0x00_1A0;
    var binary = 0b1_000;
}

Local functions

Local functions can be defined in the scope of a method.

This is something you would do when you need a small helper like this one:

public void RegularMethod2()
{
  Func<int, bool> even = (number) => number % 2 == 0;
  foreach (var number in Enumerable.Range(0, 10).Where(even))
  {
      Console.WriteLine(number);
  }
}

This could be rewritten in the following way now:

class LocalFunctions
{
    public void RegularMethod()
    {
       bool Even(int number) => number % 2 == 0;

       foreach(var number in Enumerable.Range(0,10).Where(Even))
       {
          Console.WriteLine(number);
       }
     }
}

As you can see, local functions support expression bodies; they can also be async.
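For instance, an async local function (a small sketch, assuming using System.Threading.Tasks):

public async Task DelayedPrint(string message)
{
    // an async local function capturing the parameter
    async Task PrintLater()
    {
        await Task.Delay(100);
        Console.WriteLine(message);
    }

    await PrintLater();
}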

Local functions can capture variables as lambdas do.

public void Foo(int z)
{
    void Init()
    {
        Boo = z; // Boo is a property (or field) of the enclosing class
    }
    Init();
}

It might also come in handy for iterators:

 int[] GetFoos()
 {
     IEnumerable<int> result() // iterator local function
     {
         yield return 1;
         yield return 2;
     }
     return result().ToArray();
 }

Ref returns and locals

Sort of a low-level feature, in my opinion. You can return a reference from a method. Eric Lippert once thought that

we believe that the feature does not have broad enough appeal or compelling usage cases to make it into a real supported mainstream language feature.

Not anymore, he-he.

 static void Main()
 {
     // assumes: using static System.Console;
     var arr = new[] { 1, 2, 3, 4 };
     ref int Get(int[] array, int index) => ref array[index];
     ref int item = ref Get(arr, 1);
     WriteLine(item);
     item = 10;
     WriteLine(arr[1]);
     ReadLine();
 }

Will print:

2
10

Pattern matching

Patterns are used in the is operator and in a switch-statement to express the shape of data against which incoming data is to be compared. Patterns may be recursive so that subparts of the data may be matched against subpatterns.

This is huge. The C# community has been waiting for it for a long time. Unfortunately, the syntax is not final yet.

Several types of patterns are supported for now.

Type pattern

The type pattern is useful for performing runtime type tests of reference types.

public void Foo(object item)
{
    if (item is string s)
    {
        WriteLine(s.Length);
    }
}

Constant Pattern

A constant pattern tests the runtime value of an expression against a constant value.

public void Foo(object item)
{
    switch (item)
    {
        case 10:
            WriteLine("It's ten");
            break;
        default:
            WriteLine("It's something else");
            break;
    }
}

Var Pattern

A match against a var pattern always succeeds. At runtime, the value of the expression is bound to a newly introduced local variable.

 public void Foo(object item)
 {
     if(item is var x)
     {
         WriteLine(item == x); // prints true
     }
 }

Wildcard Pattern

Every expression matches the wildcard pattern.

 public void Foo(object item)
 {
     if(item is *)
     {
         WriteLine("Hi there"); //will be executed
     }
 }

Recursive Pattern

public int Sum(LinkedListNode<int> root)
{
    switch (root)
    {
        case null: return 0;
        case LinkedListNode<int> { Value is var head, Next is var tail }:
            return head + Sum(tail);
        case *: return 0;
    }
}

Others

switch-based patterns can contain a so-called guard clause:

public void Foo(object item)
{
    switch(item)
    {
        case int i when i > 10:
            WriteLine("That's a good amount");
            break;
        case int i:
            WriteLine("That's fine");
            break;
        default:
            WriteLine("whatever");
            break;
    }      
}

Patterns can be combined:

public void Foo(object item)
{
    if(item is string i && i.Length is int l)
    {
        WriteLine(l > 10);
    } 
}

Conclusion

Pattern matching is really neat. I spent some time with it and I like it.


Fully automated Continuous Integration for your Open Source library for free

This is a long title. Well, the post is going to be long as well.

I want to show how you can set up a CI pipeline using free services and tools.

As an example I'm going to use my pet project: the AsyncSuffix plugin for ReSharper. The reason is that the way you pack and publish R# extensions is slightly different from a regular NuGet package; I actually hit some limitations of NuGet.exe and AppVeyor.

GitHub

Git and GitHub are both a kind of industry standard for open source software development nowadays.

I'm using a slightly modified version of git flow for my project. I have two mandatory branches: master and develop. The master branch contains released code marked with tags; the develop branch is for stable pre-release code. These two branches are configured as protected, so code can never be merged unless the CI server reports a successful build. This allows me to publish stable and pre-release packages automatically.

Development takes place in feature branches. A build triggered from a feature branch creates an alpha package and publishes it to a separate NuGet feed provided by AppVeyor.

GitVersion

I like the approach suggested by GitVersion: you can derive the package version from your branching model.

The basic idea is simple:

  • A build triggered by a commit to a feature branch produces an alpha package; beta packages come from the develop branch; release candidates come from master.
  • A tag produces a stable version (so a build from develop might yield something like 1.2.0-beta0003, while tagging master with v1.2.0 yields the stable 1.2.0 package).

If you want more background, I recommend a couple of nice posts written by my colleague.

AppVeyor

I’m using AppVeyor as a CI server.

AppVeyor is a free cloud build server that is easy to integrate with your GitHub repository. Creating an account is simple: you can log in using your GitHub account and you're done.

There are two ways to configure the build for a project:

  • Using the UI
  • By placing an AppVeyor.yml file in the root of the repository

The first option is good for testing, but committing the configuration file to the repository gives you the ability to track versions.

The goal

As I already mentioned, I work in a feature branch and I want to make sure that the build is not broken. That means the build server constantly compiles and packs the code. However, code in a feature branch is most likely unstable, and it's better not to publish it to the official feed.

On the other hand, when the feature is complete, tested, and merged to the develop branch, I'm more than happy to publish a prerelease package. I'm dogfooding anyway.

External pull requests are a different story. The build must be triggered (how else can I be sure it's safe to accept?), but I don't want any packages created.

Let’s summarise it:

  • Build process is triggered by any commit, merge or tag action
  • Build process depends on the branch name

Configuration

First of all, we need to agree on branch naming. Typically it depends on the branching model you use. For GitFlow I'm using the following convention:

  • master and develop - for stable code
  • feature/* - for unfinished features

My AppVeyor.yml is assembled from the snippets below.

Unfortunately, there is no way to have common sections in the config file (well, at least at the moment), so we have no option but to keep several very similar configurations. However, AppVeyor evolves very quickly, and we can expect improvements.

Once we have the templates defined, we can start with the actual build steps.

Install GitVersion from Chocolatey

install:
    - choco install gitversion.portable -y

Define environment variables (different for each branch configuration)

environment:
    resharper_nuget_api_key:
      secure: RjiHK3Oxp74LUrI1/vmc2S36zOSRLxFM1Eq0Qn4hixWiou11jFqUbW2ukMNXrazP

Important note: the API key is encrypted, so it can safely be committed to the public repository.

Restore NuGet packages and set the build version.

before_build:
    - ps: nuget restore
    - ps: gitversion /l console /output buildserver /updateassemblyinfo /b (get-item env:APPVEYOR_REPO_BRANCH).Value

The build itself:

build:
    project: AsyncSuffix.sln

Now it's time to create the NuGet package. AppVeyor has a feature to create packages automatically:

CreatePackage

It doesn't work for me, though. At the moment it ignores the contents of the .nuspec file and runs the nuget pack command against the .csproj file instead.

That’s why I have to define an after_build step:

after_build:
    - ps: nuget pack AsyncSuffix/AsyncSuffix.nuspec -Version (get-item env:GitVersion_InformationalVersion).Value

I have to specify the version because the nuget pack command reads the package version from assembly annotations, and unfortunately one of mine gets in the way.

As you can guess, RegisterConfigurableSeverityAnnotation is declared in one of the R# SDK's assemblies. NuGet fails to load it and falls back to version 1.0.0.

The final step is different for every branch configuration:

- ps: if(-not $env:APPVEYOR_PULL_REQUEST_NUMBER){ 
        nuget push *.nupkg 
            -ApiKey (get-item env:resharper_nuget_api_key).Value 
            -Source https://resharper-plugins.jetbrains.com 
      }

If the APPVEYOR_PULL_REQUEST_NUMBER environment variable is defined, we're performing a synthetic build of the merge commit from a pull request, and nothing gets published. Otherwise, the ApiKey is transparently decrypted and the package is published:

Success

P.S. You can definitely do more than publish packages: you can run tests, execute custom build scripts (PowerShell, FAKE), or even deploy your application.