Future Decoded & vNext

I attended the Future Decoded Tech Day at the ExCeL in London last week and I have to say it was fantastic!

There were keynotes from some fantastic speakers in the morning, followed by technical tracks in the afternoon where some very big announcements were made.

The technical track I followed was 'Modern Development with Visual Studio', comprising three sections that outlined some big announcements about the future of software development with Visual Studio.

The technical track was presented by @JonGalloway and Richard Lander, running in parallel with the Connect() event in NYC hosted by @SHanselman.

They aimed to make the announcements at the same time, so it was pretty cool to be there live for such a big event.

An event so big, in fact, that the room wasn't big enough to hold everyone; that's my only criticism of the whole day. It was sad to see so many interested people turned away from the room. Luckily I managed to get there early enough for a seat.

The biggest announcements were around vNext and the Roslyn compiler, the biggest news being Microsoft's venture into the open source world: all of the vNext components are available on GitHub and are completely open source!

It's amazing to see such progression from such a huge and notoriously closed company.

Some of the more notable aspects of the vNext developments were around NuGet and packaging. It's now possible to package each individual application with its own CLR and .NET Framework; these can be cloud-optimised versions that have no significant impact on any other application running on your infrastructure. Better yet, this opens developers up to a whole new world of freedom in what packages they can choose, without the need to worry about system admin restrictions.

One of the other really cool features worth mentioning was a solution to something that annoys me on a daily basis. When working in a team environment I, as I'm sure many others do, often encounter merge conflicts in the project file and am presented with a mess of XML to untangle in the csproj.

It seems those clever chaps at Microsoft have listened to the many moans and grumbles about this and have simplified the file in a huge way!

They've completely done away with XML and replaced it with a JSON file that is hugely simple; in fact, it's pretty much just a list of the NuGet packages that your application includes.
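For a flavour of quite how simple, here's roughly what one of the early project.json files looked like; the package names, versions, and framework monikers below are illustrative examples from the beta timeframe, not an official template:

    {
        "dependencies": {
            "Microsoft.AspNet.Mvc": "6.0.0-beta1",
            "Newtonsoft.Json": "6.0.6"
        },
        "frameworks": {
            "aspnet50": {},
            "aspnetcore50": {}
        }
    }

Compare that to the average csproj and it's easy to see why merge conflicts become far less painful.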

In keeping with the NuGet theme, the .NET Framework has also been completely granularised: each namespace is its own package that can be brought into your application independently.

Finally, in keeping with the whole new open principle at Microsoft, the .NET Framework can now run completely independently, not only of Visual Studio but of a Windows environment entirely.

Jon Galloway went as far as demonstrating a .NET application being developed and built on a Mac using Sublime Text as the editor.

Amazing stuff from an amazing day... can't wait for next year!

I've uploaded some pictures:

Keynote from Lotus F1

Keynote from Professor Brian Cox

Then on to the Tech Track

You say Origin Master, I say Origin/Master... let's call the whole thing off.

I had some old changes on one of my Git branches today that I wanted to get rid of; they had been staged, but I didn't need them. The easiest thing to do was just to overwrite the local master branch with what's on the remote.

I tried the command: git reset --hard origin master

To which Git informed me: fatal: Cannot do hard reset with paths

I figured out the problem was a missing / between origin and master; the actual command I needed was git reset --hard origin/master, which Git was much happier with.

This did lead me to question, however: what is the difference between origin master and origin/master?

After some background reading I discovered these are actually three different things:

origin, the remote repository name.

master, the remote branch name.

origin/master, the cached local copy of the master branch on the origin remote, updated whenever you fetch.

So in typing git reset --hard origin master, I was actually asking Git to reset to origin and limit the operation to a path called master, and a hard reset can't be restricted to paths; hence the fatality :)

Using origin/master got me back to where I wanted to be: my local branch became a direct replica of what was last fetched from the master branch on origin.
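Putting it all together, the sequence I should have reached for (fetching first keeps the cached origin/master current, since it only knows about the last fetch):

    git fetch origin                  # refresh the cached copy of the remote branches
    git reset --hard origin/master    # point the current branch and working tree at it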

Clear as mud...

ACL Hell

In my last post I discussed the potential of System.Diagnostics.Process for a solution we were implementing.

In this post I'm not ashamed to admit that I probably went down the wrong path with my System.Diagnostics solution. What I found was that when I attempted to get the diagnostics information from a process through our IIS-hosted application, the application didn't have sufficient access rights to reach the service processes running under the system account.

My development was code complete and I merged it into our testing environment branch, only to find that the application wasn't returning the process list I expected, just the small number of processes running at the same access level as IIS.

I tend to turn ACL off on my local machine as it really annoys me. In this case, however, I made the mistake of not doing early testing on the remote development environment and checking the impact of the user account settings.

At this point I hoped to find a quick fix rather than attempt a complete re-implementation.

I asked the question on Stack Overflow and MSDN Forums.

Neither response was what I really wanted to hear, so it was back to the drawing board for me...

I made the decision to create a Windows service to host a simple ServiceStack service implementation, the beauty being that this service would run with the same access rights as the processes I wanted to provide diagnostics for.

After some refactoring and a little new code, I've implemented a solution that works just great. Our Pingdom account calls our web application, the web application calls the ServiceStack service through the client, and the ServiceStack service returns all the process information we need for any given service.
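For a flavour of the shape of that service, here's a minimal ServiceStack sketch. The route, type names, and property set are my illustrative assumptions rather than the production code, and it returns a lightweight DTO up front, which is the approach the rest of this post arrives at:

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;
    using ServiceStack;

    // Request DTO: which service's processes to report on.
    [Route("/processes/{Name}")]
    public class GetProcesses : IReturn<List<ProcessInfo>>
    {
        public string Name { get; set; }
    }

    // Lightweight response DTO (illustrative property set).
    public class ProcessInfo
    {
        public int Id { get; set; }
        public string ProcessName { get; set; }
        public long WorkingSet64 { get; set; }
    }

    public class DiagnosticsService : Service
    {
        public object Any(GetProcesses request)
        {
            // Running inside the Windows service host, this executes with
            // the same access rights as the processes being inspected.
            return Process.GetProcessesByName(request.Name)
                .Select(p => new ProcessInfo
                {
                    Id = p.Id,
                    ProcessName = p.ProcessName,
                    WorkingSet64 = p.WorkingSet64
                })
                .ToList();
        }
    }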

I did encounter some late issues when trying to return the System.Diagnostics.Process type as an array; as mentioned in the MSDN response, you can't actually access all of the properties of a System.Diagnostics.Process at all times.

When the ServiceStack implementation tried to serialize the data from the process object, exceptions were thrown by properties that can only be accessed once the process has exited.

The attempted serialization invoked a call to the ExitTime property, which records the moment the process died. As the process was still running, this caused the application to throw a System.InvalidOperationException: "Process must exit before requested information can be determined."

This resulted in me wrapping the .NET System.Diagnostics.Process information in my own lightweight implementation, which did three things: it fixed my error, it reduced the information I return to exactly what I currently need, and it removed a code smell that stopped me from being able to unit test my code.

Once I had my own wrapper behind an interface, I could mock out the return values rather than depending on the Diagnostics.Process class itself, which only provides a collection of read-only properties.
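A rough sketch of that seam, reusing the ProcessInfo DTO from the sketch above (IProcessProvider and the type names are illustrative, not the exact production code):

    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    // The mockable seam: unit tests supply a fake implementation that
    // returns canned ProcessInfo values instead of touching real processes.
    public interface IProcessProvider
    {
        IEnumerable<ProcessInfo> GetByName(string name);
    }

    // Production implementation: copies only properties that are safe to
    // read while a process is running, so serialization never trips over
    // exit-only members like ExitTime.
    public class SystemProcessProvider : IProcessProvider
    {
        public IEnumerable<ProcessInfo> GetByName(string name)
        {
            return Process.GetProcessesByName(name)
                .Select(p => new ProcessInfo
                {
                    Id = p.Id,
                    ProcessName = p.ProcessName,
                    WorkingSet64 = p.WorkingSet64
                });
        }
    }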

I need to refactor the code out of the final implementation; however, I still think this could be a fun project to work on, and I fully intend to provide a GitHub solution in the coming weeks.

System.Diagnostics

This week I've been working with the System.Diagnostics namespace, and in particular the Process class.

One of our Windows services on our remote servers has a memory leak, and we are currently using various diagnostics to try to find the source. In the meantime, as a quick fix, I've extended a system we currently use to check the health of our distributed servers.

I use the Process class to pull the Process.WorkingSet64 property for the designated process and check it against a sensible maximum value for that process's memory.

The Process.WorkingSet64 property gets the amount of physical memory allocated to the process; the working set is generally what you see in the monitoring tools in Windows Server.
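The check itself boils down to a few lines. A minimal sketch, with the threshold as a placeholder value rather than our real limit:

    using System.Diagnostics;
    using System.Linq;

    public static class MemoryCheck
    {
        // Placeholder threshold: flag anything over 500 MB of working set.
        private const long MaxWorkingSetBytes = 500L * 1024 * 1024;

        public static bool IsOverLimit(string processName)
        {
            // Sum covers the (unusual) case of multiple instances
            // of the same service process.
            return Process.GetProcessesByName(processName)
                          .Sum(p => p.WorkingSet64) > MaxWorkingSetBytes;
        }
    }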

We use Pingdom to continually monitor the state of our servers. Every minute, Pingdom hits our servers and triggers several diagnostics checks. With my new check, in the event the memory starts to become a little on the bloated side, I take the server offline and Pingdom sends a text message to the designated support team to let them know that the server is down.

I'm hoping to create a standalone, reusable version of this that I can build on in the coming months; once I have something up and running I'll post it on my GitHub account.

Horses For Courses

I recently attended a Scrum course with Mike Cohn. Mike is one of the biggest names in Agile and has written several books on the topic.

There isn't a lot that Mike doesn't know about Agile; he's worked with teams at some of the best technology companies in the world.

One thing Mike's course excelled in was stimulating conversation between the scrum masters in attendance. 

There were exercises sprinkled throughout Mike's content that often had me debating with one of the scrum masters at my table more than most. As the course went on, I realised the reason this particular scrum master and I had such conflicting opinions came down to the maturity of our teams.

I work in a relatively new team that's gone through several transitions and is still not fully stabilised; my counterpart's team had been running for 18 months with very few staff changes.

Ultimately each team is individual, and the only right answer to a lot of process-specific questions is the answer that's right for that team.

Newer teams require more structure, a more metric-driven, fact-based approach to development, and much more leniency when it comes to estimation. Once a team becomes more mature, more trust exists in the process, and they can tweak the factors to meet the needs of the players.

With any new team it's always important to give them room to find their way and not to assume what's worked for one team can be replicated elsewhere.


Anchors & Propellers

So I'm not long back from a trip where I spent time working with our development team based in Singapore. We undertook a week-long session with ThoughtWorks, who provided their six-monthly Business Agility Review.

One of the first exercises had us take two large sheets of paper, heading one Anchors and the other Propellers. The development team were then given post-it notes and asked to provide examples of:

Anchors: Things that are holding the team back and weighing down the development process, general negatives.

Propellers: Things that are propelling the team, increasing the productivity and agility, general positives.

Each of the team members gave their opinions and stuck them to either the anchor or propeller page.

Once all of the team's opinions were collected, we grouped similar suggestions and gave them general headings that captured the overall point. For example, we recently switched to Git as our source control system, so several mentions of this were grouped under the heading of Git.

The result was a mixture of single post-it notes and grouped ones; the team was then given three votes per developer and asked to vote for the areas they felt had the biggest impact.

At the end of the voting exercise we took the three highest-voted points and dissected what could be done to improve the anchors and to ensure the propellers keep propelling.

In this instance the exercise covered the last six months, but it's certainly something teams could do on a more regular basis, perhaps in addition to their regular sprint review techniques.

I'll certainly be bringing this in at fixed intervals to try to monitor the team's overall thoughts on process.


This isn't a scrum team

There's currently a huge problem in the software development industry. It's something I've encountered a lot recently whilst being involved in building a new development team, and something I feel many developers end up with a fundamentally incorrect view of.

The problem is the understanding of the difference between Scrum and Agile.

Let's turn to Wikipedia for a helping hand with a clear definition of both:
 
Agile software development is a group of software development methods in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams.

Scrum is an iterative and incremental agile software development framework for managing product development.

On the surface the differences between these two are subtle. The key is that agile development defines a group of software development methods; Scrum is a fixed framework that is fairly basic in structure yet complex to implement correctly.

I can't tell you the number of times I've seen developers complain about something not being "proper scrum" and then go on to explain the incorrectness of the story point system, or how the user stories don't have enough information to estimate from.

These have nothing to do with Scrum; they are Agile concepts pulled from XP and other agile methods.

Scrum involves the players in the game (Product Owner, Scrum Master, Development Team) and, in addition, a development structure: the sprint, sprint planning, the sprint review, the sprint retrospective, and the daily scrum.

That's it, that's scrum.

Feel free to verify this and re-familiarise yourself with these concepts:
Scrum Guide

The rest is up to the individual scrum team to figure out. It's down to them to try out other Agile methods, like story points for estimation and user stories for requirements gathering; the team can then figure out for themselves what works for them.

So, developers, take note: the next time you complain about the scrum process, just make sure that's actually the problem and not your understanding of this quite specific framework and the rules it's played by.