C# 7 features preview

Last week my Twitter feed exploded with entries about the Microsoft //Build 2016 conference. As it’s one of the most important events for the .NET dev community, MSFT prepared quite a few awesome announcements for us.

Since I got a bit sick this weekend, I had plenty of time to play with the new VS “15” and C# 7.

Getting started

Let’s grab a VS “15” first. There is a new installer by the way.

Enabling experimental features

By default VS “15” uses C# 6, so we need to add a conditional compilation symbol to our project: __DEMO__.

To do that, go to Properties > Build > Conditional compilation symbols.

conditional compilation symbols dialog

Once we’re done with that, VS “15” will pick up the changes automatically.


As of today C# 7 goes with several features:

  • Binary literals
  • Digit separators
  • Local functions
  • Ref returns and locals
  • Pattern matching

Binary literals and Digit separators

These are very minor features; nothing to write home about, you know. In addition to the existing integer literals, such as hex, we can now use binary ones.

    class LiteralsDemo
    {
        public void BinaryLiterals()
        {
            var numbers = new[] { 0b0, 0b1, 0b10 };
            foreach (var number in numbers)
                Console.WriteLine(number);
        }
    }
Simple and works as expected.

binary literals output

The same goes for digit separators. A similar feature has existed in Java since version 7.

    public void DigitSeparators()
    {
        var amount = 1_000;
        var thatIsALot = 1_000_000;
        var iAmHex = 0x00_1A0;
        var binary = 0b1_000;
    }

Local functions

Local functions can be defined in the scope of a method.

This is something you would do when you need a small helper like this one:

    public void RegularMethod2()
    {
        Func<int, bool> even = number => number % 2 == 0;
        foreach (var number in Enumerable.Range(0, 10).Where(even))
            Console.WriteLine(number);
    }

This could be rewritten in the following way now:

    class LocalFunctions
    {
        public void RegularMethod()
        {
            bool Even(int number) => number % 2 == 0;

            foreach (var number in Enumerable.Range(0, 10).Where(Even))
                Console.WriteLine(number);
        }
    }

As you can see, local functions support expression bodies; they can also be async.
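For instance, a minimal sketch of an async local function (the class and method names here are made up for illustration):

```csharp
using System.Threading.Tasks;

class AsyncLocalFunctionDemo
{
    public async Task<int> GetAnswerAsync()
    {
        // An async local function can be awaited like any async method
        async Task<int> DoubleAsync(int x)
        {
            await Task.Delay(10);
            return x * 2;
        }

        return await DoubleAsync(21);
    }
}
```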

Local functions can capture variables, just as lambdas do.

    public void Foo(int z)
    {
        void Init() => Boo = z; // captures the parameter z

        Init();
    }

They can also be handy for iterators:

    int[] GetFoos()
    {
        IEnumerable<int> result() // iterator local function
        {
            yield return 1;
            yield return 2;
        }
        return result().ToArray();
    }

Ref returns and locals

Sort of a low-level feature, in my opinion. You can now return a reference from a method. Eric Lippert once thought that

we believe that the feature does not have broad enough appeal or compelling usage cases to make it into a real supported mainstream language feature.

Not anymore, he-he.

    static void Main()
    {
        var arr = new[] { 1, 2, 3, 4 };
        ref int Get(int[] array, int index) => ref array[index];
        ref int item = ref Get(arr, 1);
        item = 10;
        Console.WriteLine(string.Join(", ", arr));
    }

Will print:

    1, 10, 3, 4
Pattern matching

Patterns are used in the is operator and in a switch-statement to express the shape of data against which incoming data is to be compared. Patterns may be recursive so that subparts of the data may be matched against subpatterns.

This is huge. The C# community has been waiting for it for a long time. Unfortunately, the syntax is not final yet.

There are several types of patterns supported for now.

Type pattern

The type pattern is useful for performing runtime type tests of reference types.

    public void Foo(object item)
    {
        if (item is string s) WriteLine(s.Length);
    }

Constant Pattern

A constant pattern tests the runtime value of an expression against a constant value.

    public void Foo(object item)
    {
        switch (item)
        {
            case 10: WriteLine("It's ten"); break;
            default: WriteLine("It's something else"); break;
        }
    }

Var Pattern

A match against a var pattern always succeeds. At runtime, the value of the expression is bound to a newly introduced local variable.

    public void Foo(object item)
    {
        if (item is var x)
            WriteLine(item == x); // prints true
    }

Wildcard Pattern

Every expression matches the wildcard pattern.

    public void Foo(object item)
    {
        if (item is *)
            WriteLine("Hi there"); // will be executed
    }

Recursive Pattern

    public int Sum(LinkedListNode<int> root)
    {
        switch (root)
        {
            case null: return 0;
            case LinkedListNode<int> { Value is var head, Next is var tail }:
                return head + Sum(tail);
            case *: return 0;
        }
    }


switch-based patterns can also contain a so-called guard clause:

    public void Foo(object item)
    {
        switch (item)
        {
            case int i when i > 10: WriteLine("That's a good amount"); break;
            case int i: WriteLine("That's fine"); break;
        }
    }

Patterns can also be combined:

    public void Foo(object item)
    {
        if (item is string i && i.Length is int l)
            WriteLine(l > 10);
    }


Pattern matching is really neat. I spent some time with it and I like it.


Fully automated Continuous Integration for your Open Source library for free

open source is communism

This is a long title. Well, the post is going to be long as well.

I want to show how you can set up the CI pipeline using free services and tools.

As an example I’m going to use my pet project: the AsyncSuffix plugin for ReSharper. The reason is that the way you pack and publish R# extensions differs slightly from a regular NuGet package; I actually ran into some limitations of NuGet.exe and AppVeyor.


Git and GitHub are both the de facto industry standard for open-source software development nowadays.

I’m using a slightly modified version of git flow for my project. There are two mandatory branches: master and develop. The master branch contains released code marked with tags; the develop branch is for stable pre-release code. Both branches are configured as protected, so code can never be merged unless the CI server reports a successful build. This allows me to publish stable and pre-release packages automatically.

Development takes place in feature branches. A build triggered from a feature branch creates an alpha package and publishes it to a separate NuGet feed provided by AppVeyor.


I like the approach suggested by GitVersion: you can define the package version based on your branching model.

The basic idea is simple:

  • A build triggered by a commit to a feature branch produces an alpha package; beta packages come from the develop branch; release candidates come from master.
  • A tag produces a stable version.

If you want more background, I recommend a nice couple of posts written by my colleague.


I’m using AppVeyor as a CI server.

AppVeyor is a free cloud build server that is easy to integrate with your GitHub repository. Creating an account is simple: you can log in using your GitHub account and you’re done.

There are two ways to configure the build for the project:

  • Using the UI
  • By placing an AppVeyor.yml file in the root of the repository

The first option is good for testing, but committing the configuration file to the repository gives you the ability to track versions.

The goal

As I already mentioned, I work in feature branches and I want to make sure the build is not broken. That means the build server constantly compiles and packs the code. However, code in a feature branch is most likely unstable, and it’s better not to publish it to the official feed.

On the other hand, when a feature is complete, tested, and merged to the develop branch, I’m more than happy to publish a prerelease package. I’m dogfooding anyway.

External pull requests are a different story. The build process must be triggered (how else can I be sure it’s safe to accept them?), but I don’t want any packages to be created.

Let’s summarise it:

  • The build process is triggered by any commit, merge, or tag
  • The build process depends on the branch name


First of all, we need to agree on branch naming. Typically it depends on the branching model you use. For GitFlow, I’m using the following convention:

  • master and develop - for stable code
  • feature/* - for unfinished features

My AppVeyor.yml looks like this:
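The full file isn’t reproduced in this copy. A minimal sketch of the branch-specific layout (assuming AppVeyor’s support for multiple configurations in one file, separated by `---`; the steps shown are the ones described below):

```yaml
# Configuration for stable branches
branches:
  only:
    - master
    - develop
install:
  - choco install gitversion.portable -y
before_build:
  - ps: nuget restore
build:
  project: AsyncSuffix.sln
---
# Configuration for feature branches
branches:
  only:
    - /feature\/.*/
install:
  - choco install gitversion.portable -y
before_build:
  - ps: nuget restore
build:
  project: AsyncSuffix.sln
```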

Unfortunately, there is no way to share common sections in the config file (at least at the moment), so we have no other option but to keep several very similar configurations. However, AppVeyor evolves quickly, and we can expect improvements.

Once we have templates defined we can start with actual build steps.

Install GitVersion from Chocolatey:

    - choco install gitversion.portable -y

Define environment variables (different for each branch configuration). The variable name matches the one used in the push step below:

    environment:
      resharper_nuget_api_key:
        secure: RjiHK3Oxp74LUrI1/vmc2S36zOSRLxFM1Eq0Qn4hixWiou11jFqUbW2ukMNXrazP

Important note: the API key is encrypted, so it can be safely published to a public repository.

Restore NuGet packages and set the build version.

    - ps: nuget restore
    - ps: gitversion /l console /output buildserver /updateassemblyinfo /b (get-item env:APPVEYOR_REPO_BRANCH).Value

The build itself:

    build:
      project: AsyncSuffix.sln

Now it’s time to create a NuGet package. AppVeyor has a feature to create packages automatically:


It doesn’t work for me, though. At the moment it ignores the content of the .nuspec file and runs the nuget pack command on the .csproj file.

That’s why I have to define an after_build step:

    - ps: nuget pack AsyncSuffix/AsyncSuffix.nuspec -Version (get-item env:GitVersion_InformationalVersion).Value

I have to specify the version because the nuget pack command reads the package version from assembly annotations. Unfortunately, I have this annotation:

As you can guess, RegisterConfigurableSeverityAnnotation is declared in one of the R# SDK’s assemblies. NuGet fails to load it and falls back to version 1.0.0.

The final step is different for every branch configuration:

    - ps: if(-not $env:APPVEYOR_PULL_REQUEST_NUMBER) {
            nuget push *.nupkg `
                -ApiKey (get-item env:resharper_nuget_api_key).Value `
                -Source https://resharper-plugins.jetbrains.com
          }

If the APPVEYOR_PULL_REQUEST_NUMBER environment variable is defined, we’re currently performing a synthetic build of the merge commit from a pull request, so no package is pushed. Otherwise, the ApiKey is transparently decrypted and the package is published:


P.S. You can definitely do more than publish packages: execute tests, run custom build scripts (PowerShell, FAKE), or even deploy your application.

How to convince Linus Torvalds to contribute to your project

Look at all those famous people committing to some random developer’s boring repository.


Why would they do that? In fact, they don’t.

In general, git is just a tool that allows you to create patches and distribute them around by email.

When you create a commit, it is signed with your name and email. Look at the author part here:


Your name is listed twice for every commit: you are both the author and the committer. Technically, the author is the one who created the patch, and the committer is the one who applied it.

By default, both values are taken from your git config.
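A quick way to see those defaults (a sketch; the repository path and identity are examples):

```shell
# Create a throwaway repository and set a local identity
git init -q /tmp/identity-demo && cd /tmp/identity-demo
git config user.name  "Alice Example"
git config user.email "alice@example.com"

# These are the values git will stamp on new commits
git config user.name    # prints: Alice Example
git config user.email   # prints: alice@example.com
```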

However, there is a way to override them:
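The original command isn’t reproduced in this copy; one way to do it (a sketch, with placeholder names and emails) is to override both identities via environment variables:

```shell
# A throwaway repository with one file to commit
git init -q /tmp/override-demo && cd /tmp/override-demo
echo demo > file.txt && git add file.txt

# GIT_AUTHOR_* and GIT_COMMITTER_* override both identities for this commit
GIT_AUTHOR_NAME="Some One" GIT_AUTHOR_EMAIL="someone@example.com" \
GIT_COMMITTER_NAME="Some One" GIT_COMMITTER_EMAIL="someone@example.com" \
git commit -q -m "demo commit"

git log -1 --format="%an %cn"   # prints: Some One Some One
```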

This command will write any name and email to both the committer and the author fields.

Why would you do that?

A real-life use case: imagine you have to work with a repository on someone else’s computer. Or you work from your personal computer, but you want to commit your changes to your work repository. You might want to use the proper user name to keep the history clean and correct.


First of all, this is not a security issue. You don’t actually gain access to the person’s GitHub account or repositories. The only thing GitHub does here is link the fake commit to their account based on the e-mail address. The activity will not be shown on their profile page.

So use it wisely and do not abuse it too much :)

GitHub Deployment statuses

It’s very important to collect and track as much information as you can about your system. We have logging, monitoring, reports, and analytics. The systems we build are not just packages deployed to a server, computer, or device: everything starts in the issue-tracking system and travels through the code to production. The code, and the process of writing it, are an important part of the system, so it makes a lot of sense to collect and store data about them.

The actual process of programming is tracked by the VCS. Most teams can easily tell you where a given line of code came from: the author, the branch name, and often the issue identifier are stored in association with the commit. If you don’t modify the history, you can trace how a developer built the feature.

GitHub provides an extra level of information about your code. It’s definitely not just a web UI for your repository. A pull request collects all the commits related to a feature and tracks the discussion, and as long as it’s part of your history, you cannot delete it. That makes it a great place to aggregate other events.

A well-known example is the GitHub Statuses API.

Statuses in action

Statuses are most often sent by a CI server, but CI is not the only possible source of this information. Statuses are linked to a commit, and you can use them in your process. For example, the combination of protected branches and statuses can prevent code that cannot be built from being merged into the master branch.

The other GitHub feature, which I discovered not so long ago, is Deployment Statuses. It’s quite simple at the moment: the only thing it does is link a branch, commit, or tag to a deployment event and send notifications around.

Deployment created

A successful deployment to the test environment is just another confirmation for your teammates that your changes are fine and it’s time to merge your code.

Deployments in action

That’s how it looks. Well, except for the fact that I don’t usually talk to myself.

Requesting a new deployment using Octokit.net is relatively simple:
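The original snippet isn’t reproduced in this copy; a sketch of what it could look like (the repository names, token, and environment are placeholders, and the exact Octokit.net API shape is assumed):

```csharp
using Octokit;

var client = new GitHubClient(new ProductHeaderValue("deployment-demo"))
{
    Credentials = new Credentials("personal-access-token")
};

// Request a deployment of the develop branch to the test environment
var deployment = await client.Repository.Deployment.Create(
    "owner", "repo",
    new NewDeployment("develop")
    {
        Environment = "test",
        AutoMerge = false
    });
```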

Updating the status is not complicated either:
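Again, a sketch under the same assumptions (the deployment id, names, and the NewDeploymentStatus constructor signature are placeholders/assumed):

```csharp
using Octokit;

var client = new GitHubClient(new ProductHeaderValue("deployment-demo"))
{
    Credentials = new Credentials("personal-access-token")
};

var deploymentId = 42; // id returned when the deployment was created
await client.Repository.Deployment.Status.Create(
    "owner", "repo", deploymentId,
    new NewDeploymentStatus(DeploymentState.Success)
    {
        Description = "Deployed to the test environment"
    });
```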

As I mentioned earlier, a deployment can be requested for a particular commit, a branch, or a tag.


Avoid typing user name when committing to GitHub repository

I use different accounts and different computers to work with GitHub repositories, so sometimes I end up in an environment where I don’t have an SSH key generated.

I can still work with the command-line tool, but I have to type my credentials every time I want to pull from or push to the remote.

credentials required

Actually, I’m fine with typing the password, but not the user name. So what I can do (besides generating a new SSH key and adding it to my Git/GitHub account) is update the remote URL to include my user name.

First of all let’s check what is the value of origin url.

git config remote.origin.url

We’ll get something like this:

    https://github.com/OWNER/repo.git
Now we can update the origin url.

git config remote.origin.url https://USER@github.com/OWNER/repo.git

Your user name, repository owner and repository name will be different.

And the user name is not needed any more.

credentials not required