This developer's chronicles

Web development using Scrum, C#, AngularJS and the like

Month: January 2015

Why Continuous Integration (CI) is the way to go

Continuous Integration (CI) is one of the software practices/techniques I recently had the chance to learn and play with at SSW’s FireBootCamp, and it has completely changed the way I see software development and how I want to craft it moving forward.

If you identify with any of the items below, you should be looking into incorporating CI practices into your software development pipeline:

  1. You don’t have a source control system.
  2. You get a copy of a project from your source control system but it won’t run because there are missing files or the build is broken.
  3. You rarely check in your code.
  4. You never get the latest version from your source control system before checking in your code, to make sure new changes made by other developers don’t break your code (or, even worse, to make sure YOUR code won’t break the build).
  5. You don’t write unit tests.
  6. You don’t write integration tests.
  7. You do write unit/integration tests but don’t run them against your code AND the code that sits in the source control repository.
  8. You check in a task/feature only when it’s fully completed (after days/weeks of working on it), only to realise it won’t work on UAT/staging/production.

Any of the above will result in painful integrations and poor-quality code (usually delivered off-schedule) with many, many bugs, and in the overall feeling that software development sucks (not to mention what the end user/customer will think about the product, and about us).

Is there a way out?

Yes! It’s called Continuous Integration, and it’s relatively easy to implement (I haven’t personally had much experience with it yet, to be honest, but I can see its many benefits and the steps seem simple enough).

The whole concept revolves around improving your development practices by encouraging you and your team to:

  1. Get the latest version of the shared code before checking in your code.
  2. Run unit and integration tests against your code.
  3. Check in your code regularly (at least daily).
  4. Have a build automation system.
  5. Have the shared code built by your build automation system.
  6. Have the shared code verified by running automated unit and integration tests against it (see the sketch after this list).
  7. If any of the above steps fails, have the build automation tool inform you so you can fix the problem and start the process again.
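
To make step 6 concrete, here is a hedged sketch of the kind of automated check a build server can run on every check-in (the Calculator class and the test are invented purely for illustration):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A deliberately tiny class, standing in for your real production code.
    public class Calculator
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }

    [TestClass]
    public class CalculatorTests
    {
        // The build server runs tests like this one against every check-in;
        // a failing test breaks the build and the team gets notified (step 7).
        [TestMethod]
        public void Add_ReturnsTheSumOfItsArguments()
        {
            var calculator = new Calculator();

            Assert.AreEqual(5, calculator.Add(2, 3));
        }
    }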

Simple, huh? To me the greatest advantage of following this process is that you don’t need to spend separate time trying to integrate a newly built feature into your system; by checking in your code and fixing problems along the way, integration becomes an inherent part of the process. In other words, there’s no waiting for months to realise that two complex features you and your team were working on don’t integrate properly, so that now you have to stay back and fix them rather than going to the pub with your friends :(.

Some other (equally important) advantages are:

  1. You always have a working version of your product (which could potentially be a shippable product).
  2. You get immediate feedback about any problem that arises as a consequence of newly checked-in code.
  3. You get greater visibility of the current status of the product.
  4. Isolating and tracking bugs becomes easier, as they are often related to the new code.
  5. It reduces the number of bugs in your application, as you fix them sooner rather than when the customer rings you a month later.
  6. It improves communication within your team.

There you go: start incorporating a continuous integration process into your software development practices if you haven’t already done so.


Do you know of any other advantages that Continuous Integration provides?

Continuous Delivery with Visual Studio Online in Azure

Today, on Week 2 Day 3 of our bootcamp, we had Microsoft Developer Evangelist Andrew Coates talk about building cloud solutions with Azure.

Some of the big takeaways from the talk were:

  • Developers need to be lazy: don’t reinvent the wheel; instead, add value to your company by coding only what only you can code. For instance, don’t spend too much time working on form validation, as you can use a library or someone else’s code for that; instead, spend that time coding the things only you know, such as solutions to your business needs.
  • Microsoft Azure offers a huge range of solutions/services spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS), as do companies such as Amazon, so you can concentrate on the important stuff and let them worry about keeping everything else up to date.
  • You can scale up/down or scale out according to your needs (this can be triggered automatically when certain conditions are met, such as a machine’s processing power reaching a certain threshold).
  • You can achieve Continuous Delivery by combining an Azure website with Visual Studio Online, so that any time you check in your code Visual Studio Online creates a build automatically and, if it passes (including running unit tests if available), deploys it to your Azure environment (UAT, staging, production, etc.).

We also had a lab session where we could put all that theory into practice; the screencast of continuous delivery with Visual Studio Online in Azure is below:

[Screencast: Continuous Delivery with Visual Studio Online in Azure]

Thanks for reading / watching

Harold

Models and bounded context in DDD – Domain Driven Design

This blog post briefly touches on some core concepts of domain-driven design, which is built upon the idea of a model and its context.

A model, in domain-driven design terms, is an abstraction (or a group of abstractions) of a business domain: a set of functionalities and their interactions that need to be implemented and that, once completed, can be used to solve problems related to that domain.

As models grow in size and complexity (or if they are large by nature), they need to be broken down into more specific models, each bounded by the specific context it lives in (an easier way to see this is from the end user’s point of view). For example, a data-entry system used by a call-centre team gives a specific context to the model, and its terminology may only make sense to that team and not to anyone outside that context. This is referred to as a Bounded Context (BC).

Bounded Context is an important concept in domain-driven design, as it gives models explicit boundaries and defines their interactions, which in turn helps isolate each model and facilitates its design.
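
To make this a little more tangible, here is a quick C# sketch (the contexts, classes and properties are invented for illustration): the same real-world “customer” is modelled differently inside each bounded context, and neither model is forced to carry properties that only make sense in the other.

    using System;

    // Inside the call-centre (data-entry) bounded context, a "customer"
    // is mostly a record to be captured and corrected over the phone.
    namespace CallCentre.Model
    {
        public class Customer
        {
            public string FullName { get; set; }
            public string PhoneNumber { get; set; }
            public string LastCallNotes { get; set; }
        }
    }

    // Inside the billing bounded context, the same word means something
    // different: an account holder with a balance and invoices.
    namespace Billing.Model
    {
        public class Customer
        {
            public string AccountNumber { get; set; }
            public decimal OutstandingBalance { get; set; }
            public DateTime NextInvoiceDate { get; set; }
        }
    }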

When should I use ViewBag, ViewData or TempData to pass additional data to my MVC Views?

When passing additional data from an ASP.NET MVC Controller to a View there are several options available; you can use one, two or any combination of the following:

  • ViewBag
  • ViewData
  • TempData

So which one do you use? Well, before answering that question let’s have a quick look at what each of them is and how you can benefit from them.

ViewBag

ViewBag is a dynamic object, which means you can add anything you please to it (strongly typed objects, primitive values, etc.). This sounds really cool!

ViewData

ViewData is a ViewDataDictionary, a kind of dictionary of objects that can be accessed via a key (a string). As ViewData also holds objects, it is very similar to ViewBag, but it differs in that you need to do typecasting in order to get the original object back (and you’ll obviously need to check for null references before accessing its properties).

The following example will help clarify the above. Let’s say you want to pass some data from a Controller to its View and you decide to take advantage of both ViewBag and ViewData to achieve this:
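
Something along these lines (the controller name, keys and values are purely illustrative):

    using System.Web.Mvc;

    public class ProductsController : Controller
    {
        public ActionResult Index()
        {
            // ViewBag is dynamic: make up a property and assign anything to it.
            ViewBag.PageTitle = "Our products";

            // ViewData is a dictionary: store the object under a string key.
            ViewData["WelcomeMessage"] = "Welcome back!";

            return View();
        }
    }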


And now let’s get the data back from the View:
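
A sketch of the view side (again, the names are illustrative):

    <h1>@ViewBag.PageTitle</h1>

    @{
        // ViewData hands back a plain object, so typecast it
        // and check for null before using it.
        var message = ViewData["WelcomeMessage"] as string;
    }

    @if (message != null)
    {
        <p>@message</p>
    }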

See the difference? You can use either of them, but make sure that with ViewData you do the typecasting and check for nulls.


TempData

As with ViewData, TempData is also a dictionary (a TempDataDictionary, to be precise), so you can add an object to it and access it via its key (correct: doing typecasting and always checking for null references). The difference with ViewData is that TempData keeps its data beyond the current HTTP request (until it is read), so it’s useful for passing data between requests, for instance when a redirection occurs.


The following example shows how we store an object inside TempData and then retrieve it after a redirection has taken place:
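
A hedged sketch of that flow (the controller and the key name are invented):

    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        [HttpPost]
        public ActionResult Create()
        {
            // ... save the new order here ...

            // TempData survives the redirect below; ViewBag/ViewData would not.
            TempData["Notification"] = "Your order was created successfully";

            return RedirectToAction("Index");
        }

        public ActionResult Index()
        {
            // On the other side of the redirect: typecast and check for null,
            // just as you would with ViewData.
            var notification = TempData["Notification"] as string;
            ViewBag.Notification = notification;

            return View();
        }
    }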


Hope this helps someone out there. If you have any questions or feedback, please get in touch and I’ll get back to you pretty quickly!


cheers

Harold

How to use HttpClient with basic authentication to POST data asynchronously

Today I started using HttpClient with basic authentication so I could consume some Web API services (receiving and POSTing data). The first problem I found after reading the Microsoft documentation was that there was no indication of how to use it when you have to authenticate against the API before you can start receiving data (I’m assuming most APIs out there require authentication, right?).

After browsing Stack Overflow for a while and trying different approaches, I found that you can pass your credentials (converted to ASCII bytes and then Base64-encoded) inside the DefaultRequestHeaders.Authorization header value.

The following is an example of a unit test created to test the PostJsonAsync action (the message body of the request is a serialized JSON object, but you could pass a well-formed JSON string as well):
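
Something like the following (the endpoint, credentials and payload are placeholders; a real test would point at your own API):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PostJsonAsyncTests
    {
        [TestMethod]
        public async Task PostJsonAsync_WithBasicAuthentication_ReturnsSuccess()
        {
            using (var client = new HttpClient())
            {
                client.BaseAddress = new Uri("https://example.com/");

                // "username:password" -> ASCII bytes -> Base64, passed as a
                // Basic credential in the Authorization header.
                var credentials = Convert.ToBase64String(
                    Encoding.ASCII.GetBytes("myUsername:myPassword"));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", credentials);

                // The message body is a serialized JSON object (a well-formed
                // JSON string works just as well).
                var content = new StringContent(
                    "{ \"Name\": \"Harold\", \"Role\": \"Developer\" }",
                    Encoding.UTF8,
                    "application/json");

                var response = await client.PostAsync("api/values", content);

                Assert.IsTrue(response.IsSuccessStatusCode);
            }
        }
    }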

If instead you would like to pass a query string, check out this blog post: How to call HttpClient.PostAsync with a query string.


Thanks for reading

Harold
