Last week my Twitter feed exploded with entries about Microsoft's //Build 2016 conference.
As it's one of the most important events for the .NET dev community, MSFT prepared quite a few awesome announcements for us:
By default VS15 uses C# 6, so we need to add conditional compilation symbols to our project.
To do that, go to Properties > Build > Conditional compilation symbols.
Once we're done with that, VS15 will pick up the changes automatically.
As of today, C# 7 comes with several features:

* Ref returns and locals
* Binary literals and digit separators
Binary literals and digit separators are very minor features, nothing to write home about.
In addition to existing int literals such as hex ones, we can now use binary ones.
Simple, and it works as expected.
The same goes for digit separators; a similar feature has existed in Java since version 7.
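A minimal sketch of both features (the values here are my own illustration):

```csharp
using System;

class BinaryLiteralsDemo
{
    static void Main()
    {
        int flags = 0b0010_1010;   // binary literal, with a digit separator inside
        int million = 1_000_000;   // separators work in decimal (and hex) literals too

        Console.WriteLine(flags);   // prints 42
        Console.WriteLine(million); // prints 1000000
    }
}
```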
Local functions can be defined in the scope of a method.
This is something you would do when you need a small helper that is used in only one place.
With C# 7, such a helper can be rewritten as a local function.
Local functions support expression bodies, and they can also be async.
They can capture variables, just as lambdas do.
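Here is a sketch of the kind of before/after rewrite described above (the Square helper is my own example, not the post's original sample):

```csharp
using System;

class LocalFunctionDemo
{
    // Before C# 7: a one-off private helper visible to the whole class
    static int Square(int n) => n * n;

    // C# 7: the helper lives inside the only method that needs it
    static int SumOfSquares(int x, int y)
    {
        int SquareLocal(int n) => n * n; // expression-bodied local function
        return SquareLocal(x) + SquareLocal(y);
    }

    static void Main()
    {
        Console.WriteLine(SumOfSquares(3, 4)); // prints 25
    }
}
```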
They can also come in handy for iterators.
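For instance, a local function lets an iterator validate its arguments eagerly while keeping the enumeration itself lazy (a sketch with my own names):

```csharp
using System;
using System.Collections.Generic;

class IteratorDemo
{
    public static IEnumerable<int> Range(int start, int count)
    {
        if (count < 0)
            throw new ArgumentOutOfRangeException(nameof(count));

        return Iterator(); // the check above runs immediately, not on first MoveNext

        IEnumerable<int> Iterator()
        {
            for (var i = 0; i < count; i++)
                yield return start + i; // local functions capture start and count
        }
    }

    static void Main()
    {
        foreach (var n in Range(5, 3))
            Console.Write(n + " "); // prints "5 6 7 "
    }
}
```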
Ref returns and locals
This is a fairly low-level feature, in my opinion. You can return a reference from a method and store it in a ref local.
Eric Lippert once wrote that
we believe that the feature does not have broad enough appeal or compelling usage cases to make it into a real supported mainstream language feature.
Not anymore, he-he.
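A small sketch of what a ref return looks like (types and names are my own):

```csharp
using System;

class RefReturnDemo
{
    // Returns a reference to the first element, not a copy of it
    static ref int First(int[] items) => ref items[0];

    static void Main()
    {
        var numbers = new[] { 1, 2, 3 };

        ref int first = ref First(numbers); // ref local
        first = 42;                         // writes through the reference

        Console.WriteLine(numbers[0]); // prints 42
    }
}
```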
Patterns are used in the is operator and in a switch-statement to express the shape of data against which incoming data is to be compared. Patterns may be recursive so that subparts of the data may be matched against subpatterns.
This is huge. C# community has been waiting for it for a long time.
Unfortunately, the syntax is not final yet.
There are several kinds of patterns supported at the moment.
The type pattern is useful for performing runtime type tests of reference types.
A constant pattern tests the runtime value of an expression against a constant value.
A match against a var pattern always succeeds. At runtime, the value of the expression is bound to a newly introduced local variable.
Every expression matches the wildcard pattern.
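A quick sketch of the type, constant, and var patterns in is-expressions (the wildcard pattern is left out; all names are my own, and the preview syntax may still change):

```csharp
using System;

class PatternKindsDemo
{
    static string Describe(object o)
    {
        if (o is string s) return "type pattern: " + s; // runtime type test, binds s
        if (o is 42)       return "constant pattern";   // compares against a constant
        if (o is var x)    return "var pattern: " + x;  // always matches, binds x
        return "unreachable";
    }

    static void Main()
    {
        Console.WriteLine(Describe("hi"));  // prints "type pattern: hi"
        Console.WriteLine(Describe(42));    // prints "constant pattern"
        Console.WriteLine(Describe(false)); // prints "var pattern: False"
    }
}
```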
switch-based patterns can contain a so-called guard clause.
Patterns and guards can also be combined in a single case.
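For example, a case can pair a type pattern with a when guard (a sketch of the preview syntax, my own example):

```csharp
using System;

class SwitchPatternDemo
{
    static string Classify(object value)
    {
        switch (value)
        {
            case int n when n < 0:           // type pattern + guard clause
                return "negative int";
            case int n:                      // plain type pattern
                return "int " + n;
            case string s when s.Length > 3: // guards work for other types too
                return "long string";
            case null:
                return "null";
            default:
                return "something else";
        }
    }

    static void Main()
    {
        Console.WriteLine(Classify(-5));      // prints "negative int"
        Console.WriteLine(Classify("hello")); // prints "long string"
    }
}
```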
Pattern matching is really neat. I spent some time with it and I like it.
As an example, I'm going to use my pet project: the AsyncSuffix plugin for ReSharper. The reason is that the way you pack and publish R# extensions is slightly different from a regular NuGet package, and I actually ran into some limitations of NuGet.exe and AppVeyor.
Git and GitHub are both de facto industry standards for open source software development nowadays.
I'm using a slightly modified version of git flow for my project. I have two mandatory branches: master and develop.
The master branch contains released code marked with tags. The develop branch is for stable pre-release code.
These two branches are configured as protected, so code can never be merged unless the CI server reports a successful build. This allows me to publish stable and pre-release packages automatically.
Development takes place in feature branches. A build triggered from a feature branch creates an alpha package and publishes it to a separate NuGet feed provided by AppVeyor.
I like the approach suggested by GitVersion: you can derive the package version from your branching model.
The basic idea is simple:
a build triggered by a commit to a feature branch produces an alpha package, beta packages come from the develop branch, and release candidates come from master.
AppVeyor is a cloud build server, free for open-source projects, which is easy to integrate with your GitHub repository.
Creating an account is simple: you can log in using your GitHub account and you're done.
There are two ways to configure build for the project.
* Using the UI
* By placing an AppVeyor.yml file in the root of the repository
The first option is good for experimenting, but committing the configuration file to the repository gives you the ability to track its versions.
As I already mentioned, I work in feature branches, and I want to make sure the build is not broken.
That means the build server constantly compiles and packs the code.
However, the code in a feature branch is most likely unstable, and it's better not to publish it to the official feed.
On the other hand, when a feature is complete, tested, and merged into the develop branch, I'm more than happy to publish a prerelease package.
I'm dogfooding anyway.
External pull requests are a different story. The build must still be triggered (how else can I be sure it's safe to accept?), but I don't want any packages to be created.
Let's summarise:

* The build process is triggered by any commit, merge, or tag action
* The build process depends on the branch name
First of all, we need to agree on branch naming. Typically it depends on the branching model you use.
For GitFlow I'm using the following naming convention:

* master and develop - for stable code
* feature/* - for unfinished features
My AppVeyor.yml will look like this:
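A sketch of what such a file can look like (not my literal config; AppVeyor supports several branch-specific configurations in one file, delimited with ---, and the branch filters here are placeholders):

```yaml
# Configuration for the stable branches
branches:
  only:
    - master
    - develop
configuration: Release
build:
  verbosity: minimal

---

# Near-identical configuration for feature branches
branches:
  only:
    - /feature\/.*/
configuration: Release
build:
  verbosity: minimal
```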
Unfortunately, there is no way to define common sections in the config file (at least at the moment), so we have no option but to keep several very similar configurations. However, AppVeyor evolves quickly, so we can expect improvements here.
Once we have the templates defined, we can start on the actual build steps.
If the APPVEYOR_PULL_REQUEST_NUMBER environment variable is defined, it means we're currently performing a synthetic build of the merge commit from a pull request.
The ApiKey is transparently decrypted and the package is published.
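That check can be done in a build step, for example (a sketch, not my exact script; the nuspec file name is a placeholder):

```yaml
after_build:
  - ps: >-
      if ($env:APPVEYOR_PULL_REQUEST_NUMBER) {
        Write-Host "Pull request build: skipping packaging"
      } else {
        nuget pack AsyncSuffix.nuspec
      }
```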
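The deployment section with an encrypted key looks roughly like this (a sketch; the secure value is a placeholder produced by AppVeyor's "Encrypt data" tool, and the branch filter is illustrative):

```yaml
deploy:
  - provider: NuGet
    api_key:
      secure: <encrypted-api-key>
    skip_symbols: true
    on:
      branch: develop
```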
P.S. You can definitely do more than publish packages: you can execute tests, run custom build scripts (PowerShell, FAKE), or even deploy your application.
Look at all those famous people committing to some random developer’s boring repository.
Why would they do that? In fact, they don’t.
In general, git is just a tool that allows you to create patches and distribute them by email.
When you create a commit, it will be signed with your name and email. Look at the author part here:
You have your name listed twice for every commit. You are both the author and the committer. Technically, the author is the one who created the patch, and the committer is the person who applied the patch.
By default, both values are taken from your gitconfig.
However, there is a way to override them:
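One way to do it is to override both values for a single commit with git -c; here is a self-contained sketch in a throwaway repository (the name and email are placeholders):

```shell
# Set up a throwaway repository just for the demo
repo=$(mktemp -d)
cd "$repo"
git init --quiet
echo "hello" > file.txt
git add file.txt

# The actual trick: -c overrides write BOTH the author and the committer
git -c user.name="Fake Name" \
    -c user.email="fake@example.com" \
    commit --quiet -m "Spoofed commit"

git log -1 --format='author: %an <%ae>%ncommitter: %cn <%ce>'
# author: Fake Name <fake@example.com>
# committer: Fake Name <fake@example.com>
```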
This command writes the given name and email to both the committer and the author fields.
Why would you do that?
A real-life use case: imagine you have to work with a repository on someone else's computer. Or you work from your personal computer, but you want to commit your changes to your work repository. You might want to use the proper user name to keep the history clean and correct.
First of all, this is not a security issue. You don’t actually gain access to the person’s git account and repositories.
The only thing GitHub does here is link the fake commit to an account based on the e-mail address. The activity will not be shown on that person's profile page.
It's very important to collect and track as much information as you can about your system: we have logging, monitoring, reports, and analytics. But the systems we build are not just packages deployed to a server, computer, or device. Everything starts in the issue-tracking system and flows through the code to production. The code, and the process of writing it, are an important part of the system, so it makes a lot of sense to collect and store all the data about the code as well.
The actual process of programming is tracked by the VCS. Most teams can easily tell you where a given line of code came from: the author, the branch name, and often the issue identifier are stored with the commit. If you don't rewrite history, you can trace exactly how a developer built a feature.
GitHub adds an extra layer of information about your code; it's definitely not just a web UI for your repository. A pull request collects all the commits related to a feature and tracks the discussion around them, and since it's part of your history, you cannot delete it. That makes it a great place to aggregate other events.
Statuses are most often sent by a CI server, but CI is not the only possible source of this information. Either way, statuses are linked to commits, and you can use them in your process. For example, the combination of protected branches and statuses prevents code that cannot be built from being merged into the master branch.
Another GitHub feature, which I discovered not so long ago, is Deployment Statuses. It's quite simple at the moment: all it does is link a branch, commit, or tag to a deployment event and send notifications around.
Successful deployment on the test environment is just another confirmation for your teammates that your changes are fine and it’s time to merge your code.
That's how it looks. Well, except that I don't usually talk to myself.
Requesting a new deployment using Octokit.net is relatively simple:
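A sketch using Octokit.net (the owner, repo, token, and environment names are placeholders; check the Octokit documentation for your version, as signatures can differ):

```csharp
using System;
using System.Threading.Tasks;
using Octokit;

class CreateDeployment
{
    static async Task Run()
    {
        var client = new GitHubClient(new ProductHeaderValue("my-deploy-bot"))
        {
            Credentials = new Credentials("personal-access-token") // placeholder
        };

        // The ref can be a commit SHA, a branch name, or a tag
        var deployment = await client.Repository.Deployment.Create(
            "owner", "repo",
            new NewDeployment("develop")
            {
                Environment = "test",
                Description = "Deploying the latest develop build"
            });

        Console.WriteLine($"Created deployment {deployment.Id}");
    }

    static void Main() => Run().GetAwaiter().GetResult();
}
```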
Updating the status is not complicated either:
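Posting the status back is similar (again, all identifiers are placeholders, and the exact NewDeploymentStatus constructor varies between Octokit versions, so treat this as a sketch):

```csharp
using System.Threading.Tasks;
using Octokit;

class UpdateDeploymentStatus
{
    // Marks the given deployment as successful and attaches a link
    static async Task MarkDeployed(GitHubClient client, int deploymentId)
    {
        await client.Repository.Deployment.Status.Create(
            "owner", "repo", deploymentId,
            new NewDeploymentStatus(DeploymentState.Success)
            {
                TargetUrl = "https://test.example.com", // placeholder environment URL
                Description = "Deployed to the test environment"
            });
    }
}
```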
As I mentioned earlier, a deployment can be requested for a particular commit, branch, or tag.
I use different accounts and different computers to work with GitHub repositories, so sometimes I end up in an environment where my SSH key is not set up.
I can still work from the command line, but I have to type my credentials every time I pull from or push to the remote.
I'm actually fine with typing the password, but not the user name. So what I can do (besides generating a new SSH key and adding it to my Git/GitHub account) is update the remote URL to include my user name.
First of all, let's check the value of the origin URL.
We'll get something like this:
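In a real clone you would just run the last command below; the repository URL here is a placeholder so the sketch is self-contained:

```shell
# Throwaway repository with a placeholder remote, just for the demo
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet
git remote add origin https://github.com/owner/repository.git

git config --get remote.origin.url
# https://github.com/owner/repository.git
```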
Now we can update the origin URL.
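The update itself is a single git remote set-url call; the sketch below is self-contained, with placeholder values throughout:

```shell
# Throwaway repository with a placeholder remote, just for the demo
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet
git remote add origin https://github.com/owner/repository.git

# Embed the user name into the origin URL
git remote set-url origin https://username@github.com/owner/repository.git

git config --get remote.origin.url
# https://username@github.com/owner/repository.git
```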
Your user name, repository owner, and repository name will be different, of course.