SOGETI UK BLOG

In my earlier blog post I discussed the purpose of DevOps, namely reducing the time and effort it takes to put software into production and allowing for more (continuous) production deployments. Why is this important? The typical answer is that in the digital age, SPEED is the name of the game, and it is important to increase the speed to market of IT solutions. Doesn’t everyone expect cool new features for their favorite software to be available on an ongoing basis? Doesn’t “the business” require enhancements and new capabilities in business applications to be in production as quickly as possible? So great, DevOps (and Agile) should give us this speed to production. But how is success measured? Increased frequency of releases? Reduction in the amount of time it takes to perform a deployment? Reduction in the lead time from the start of a “project” to its production release? Levels of automation? All of these are certainly valid ways to measure progress in DevOps, but they are not the most critical “macro” metrics that tell us whether we have improved SPEED in IT. Those metrics are WIP and throughput.

Let’s start with WIP. WIP stands for Work in Process and represents the sum of all the cost tied up in the entire IT department for all ongoing projects and feature-development efforts that have been started but not yet put in production. This is really all the capital tied up in the system at any given time trying to produce software. If a decision were made to halt all development activities, this number would represent all the investment dollars that would have to be written off. Any time a new project starts, or any time effort is expended on existing projects, WIP increases. Any time a project or feature set is put in production, WIP is reduced by the amount of the investment made on that project or feature set. If a project is cancelled mid-stream, WIP is similarly reduced by a write-off of the corresponding investment.

Since WIP represents tied-up capital, it is good to have low WIP and bad to have a lot of it. Increasing release frequency, and thus allowing (parts of) projects to be put in production earlier, reduces WIP and frees up working capital. There is a long way to go, because most IT organizations carry a lot of WIP, sometimes as much as a full year of IT’s productive capacity. Because of internal resource dependencies between projects, having a lot of WIP also causes poor throughput.

Throughput is the amount of WIP that is put in production in a given period, usually a month. If you have high throughput, you are not only putting a lot of projects or features into production each period, you can also start many new initiatives or projects without increasing WIP. So high throughput is good and low throughput is bad. Obviously, increasing release frequency will increase throughput. Again, there is a long way to go, since most IT organizations have very low throughput (and high WIP), and if you have high WIP and low throughput, it takes a very long time to produce anything.
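To make the two metrics concrete, here is a minimal sketch (in Python, with hypothetical project names and amounts; nothing here is an existing tool) of how WIP and throughput could be tracked from simple portfolio events: spending on work that has not shipped raises WIP, while a production release moves that investment out of WIP and counts toward the month’s throughput.

```python
from collections import defaultdict

class PortfolioLedger:
    """Tracks capital tied up in unreleased work (WIP) and monthly throughput."""

    def __init__(self):
        self.invested = defaultdict(float)           # spend per in-flight project
        self.released_by_month = defaultdict(float)  # value put in production per month

    def spend(self, project, amount):
        # Effort on work that is not yet in production increases WIP.
        self.invested[project] += amount

    def release(self, project, month):
        # A production release moves the project's investment out of WIP
        # and counts toward that month's throughput.
        self.released_by_month[month] += self.invested.pop(project, 0.0)

    def cancel(self, project):
        # A cancelled project is simply written off: WIP drops, nothing ships.
        self.invested.pop(project, None)

    @property
    def wip(self):
        return sum(self.invested.values())

    def throughput(self, month):
        return self.released_by_month[month]


ledger = PortfolioLedger()
ledger.spend("billing-revamp", 400_000)   # hypothetical projects and amounts
ledger.spend("mobile-app", 250_000)
ledger.release("mobile-app", month="2016-10")
print(ledger.wip)                     # 400000.0 still tied up
print(ledger.throughput("2016-10"))   # 250000.0 put into production that month
```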

Which brings me back to the purpose of DevOps and Agile, which is to increase SPEED. Speed is gauged by looking at throughput and WIP. Measure these and you will know whether your DevOps initiative is actually helping you accomplish your goal or is just another excuse to add a couple of cool tools to your inventory.

 

AUTHOR: Kasper de Boer
Kasper de Boer is a Vice President at Sogeti US, where he is currently responsible for the Infrastructure Practice. Kasper has 25 years of experience in IT consulting and is particularly interested in IT organizations and how to make them more efficient and effective.

Posted in: Automation Testing, DevOps, Digital strategy, Innovation, IT strategy, project management, Research, test framework      

 

DevOps is the new thing in IT. Many of our clients are either considering implementing DevOps or are well underway. For the moment, I’ll leave aside the question of what DevOps actually is, because there appear to be as many interpretations as there are implementations. The purpose of DevOps, however, is pretty clear: reducing the time and effort it takes to put software into production and allowing for more frequent production deployments. It is the companion of Agile, but it tackles the speed problem from the back end of the SDLC (deployment, QA, environment control) rather than the front end (requirements analysis, code development, integration testing). Where Agile seeks to bring together business owners, developers, architects and testers, DevOps seeks to bring together final QA, environment operations, DBAs, code management, and deployment operations.

It is obvious from the purpose of DevOps that to improve speed you will need to improve the level of automation in every step of the deployment process. And in order to improve levels of automation, tools are needed. So, what tools should you have? Certainly you’ll need test automation. You will also need tools to automate the commissioning and de-commissioning of compute environments in your data center or the cloud; automated source control, coupled with the ability to do “push-button” deployment to target environments; a CMDB and a change-control environment coupled with workflows; and so on. Ideally, all these tools need to be integrated with each other to work together “seamlessly” and, in turn, be integrated with the front-end development tools. You will now understand why DevOps is a technology consultant’s nirvana: years can be spent evaluating and integrating all of this stuff, requiring expensive people with point expertise in each of these tools.

But wait. Perhaps none of this is a prerequisite. As in any process, actual processing time is only a fraction of total lead time. So, rather than trying to get processing time down, an initial DevOps implementation should probably focus on all the non-processing time first. Reducing non-processing time is largely a matter of rethinking how the entire process is organized, which organizational entities are involved in it, and how to rearrange these pieces for maximum speed and efficiency. For this, you will not need any tools beyond what you already have in house.

So, now that you know you can start to implement DevOps without having to buy new tools, what remains is how to measure the success of your DevOps initiative. I have a different perspective on this from what is normally presented. Stay tuned for my next blog post to read all about it.

 


Posted in: architecture, Automation Testing, Cloud, Developers, DevOps, IT strategy, Quality Assurance, Software Development      

 

It is common wisdom that the main driver for organizations to move to the cloud is speed, not cost: speed of provisioning, flexibility, the ability to handle bursts, IT agility, and so on.

Despite the “need for speed”, cloud services remain hard to tap into for large corporations with huge legacy compute facilities. And why would they want to anyway? They already have scale in their own data centers, and they fear that the cloud may introduce a bunch of security and compliance headaches they don’t need. I recently attended an event with many CIOs of banks, insurance companies and other large organizations.

They all told me that their “internal” prices were lower than what the cloud would provide, unless they had workloads that they could turn on and off and so take advantage of the cloud’s minute-by-minute billing. Since most of their workloads run 24×7, this does not really apply to them, other than perhaps in development and test situations.

But have they really checked the pricing lately?

The cloud is maturing. Prices are falling. The competitive landscape is expanding (for those of you who still think the cloud is just Azure or AWS). It has now become ridiculously easy to commission compute and storage from a myriad of providers. Here’s an example from a French company (www.scaleway.com) where I recently commissioned a physical bare-metal server in order to run a simple tool. It goes without saying that it was a fully automated experience: up and running in less than a minute, billed by the minute at 0.006 euro cents, and of course with a 99.9% uptime SLA plus support:

[Screenshot: Scaleway server details]

Here’s my invoice for September:

[Screenshot: Scaleway invoice for September]

For comparison, see the Azure pricing for virtual compute power below. Arguably, few organizations pay list price, but it starts to look “expensive” at 10 to 20 times the price of an arguably better physical box!

[Screenshot: Azure virtual machine pricing]

Cloud storage is getting very affordable as well:

[Screenshot: cloud storage pricing]

Given the free fall of prices for storage and compute, it is clear that it will be very difficult to compete with dedicated cloud providers, not only as an internal IT department but also as a full-service provider that seeks to offer cloud services out of its own data centers.

Our internal data centers consider themselves price-competitive at $70/month for simple compute and $150/terabyte/month for storage. So let’s use this as the benchmark for a cost-competitive internal organization and apply some simple math:

Let’s say you are a mid-size organization with the equivalent of 5,000 simple servers of compute and 2 petabytes of storage needs. Assume you run a very efficient shop and your internal costs are half of what our data-center costs are: $35/month per server and $75/terabyte/month. That equates to $2.1M per year for compute (35 × 5,000 × 12) and $1.8M per year for storage (75 × 2,000 × 12). At cloud prices of roughly $5/month per server and $5/terabyte/month, you could get the same capacity for about $300K (5 × 5,000 × 12) and $120K (5 × 2,000 × 12) respectively if you move aggressively, realizing almost 90% savings!
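For those who want to play with the assumptions, here is the same back-of-the-envelope calculation as a small Python sketch. The unit prices are the ones assumed above ($35/server/month and $75/terabyte/month internally, versus roughly $5 for each in the cloud); swap in your own numbers.

```python
SERVERS = 5_000        # simple-server equivalents
STORAGE_TB = 2_000     # 2 petabytes expressed in terabytes

def annual_cost(server_rate, tb_rate):
    """Yearly spend given monthly unit rates for compute and storage."""
    return 12 * (server_rate * SERVERS + tb_rate * STORAGE_TB)

internal = annual_cost(server_rate=35, tb_rate=75)  # the "efficient internal shop" above
cloud = annual_cost(server_rate=5, tb_rate=5)       # assumed aggressive cloud pricing
savings = 1 - cloud / internal

print(f"internal: ${internal:,.0f}/yr  cloud: ${cloud:,.0f}/yr  savings: {savings:.0%}")
# internal: $3,900,000/yr  cloud: $420,000/yr  savings: 89%
```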

These calculations are aggressive and of course there are lots of other factors to consider. Nevertheless, 50% or more savings is not something any CTO can ignore.

So, I predict that the move to the cloud will increasingly be driven primarily by cost, with the speed advantages coming along in the process.

 


Posted in: Azure, Cloud, Data structure, Infrastructure, Innovation, IT Security, IT strategy, Security, Software Development, Test environment, Webinars      

 

In general, IT is not typically described as a ‘productive’ area of the company; things get delivered that no one can remember having asked for, and the ones people can remember requesting are viewed as being produced too slowly.

In fact, measuring productivity in IT is possible, but it can get complicated. For one, you probably need a function point expert to objectively count the “stuff” that IT produces. Since this is hard, and not all that meaningful with regard to the productivity of the labour involved (a function point in a modern CRM system is a lot easier to produce than that same function point on the AS/400 and other, older systems), we usually stick with overall indicators such as IT spend/revenue or IT staff/total staff, etc.

Productivity metrics you rarely hear discussed in IT are “work-in-process (WIP)” and “throughput”, and yet these are highly indicative of IT productivity. High work-in-process means a lot of cost tied up in projects that are not yet producing benefits, and low throughput means projects take longer than necessary to deliver; if you suffer from both, you have low productivity.

Let me illustrate this with an example derived from real numbers from one of my clients:

– The IT department of our example company has around 65 people in its development group, able to produce about 10,000 hours of labour per month.
– They run roughly 30 projects simultaneously at any given time, allocating partial resources to multiple projects.
– Each project consumes about 4,000 hours of effort on average (estimate at completion, EAC).
– Throughput time for this organisation would be 12 months, i.e. on average it will take the IT department 12 months to complete a given project: 10,000/30 = 333 hours of labour per month per project; 4,000/333 = 12 months of project duration.
– WIP will likely average around 6 months of labour, assuming projects are spread across different stages (30 projects, each roughly half done, is about 60,000 hours of sunk effort). And 30 projects can be completed in a year (120,000 hours of labour).

If my client ran just 10 projects concurrently instead of 30, throughput time would drop to 4 months (10,000/10 = 1,000 hours per project per month; 4,000/1,000 = 4 months per project). Three times faster delivery!
And WIP would be reduced to roughly 2 months’ worth of labour (one third!). Even so, the same 30 projects would be delivered in the year.
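The arithmetic above is simple enough to put in a few lines of Python. This is only a sketch of the example’s logic, not a planning tool; the capacity and effort figures are the ones from the example, and the WIP estimate assumes projects in flight are, on average, half done.

```python
CAPACITY_HOURS_PER_MONTH = 10_000   # ~65 developers
HOURS_PER_PROJECT = 4_000           # average effort at completion (EAC)

def portfolio_stats(concurrent_projects):
    """Project duration, average WIP and yearly output for a given level of concurrency."""
    # Capacity is spread evenly, so each project gets
    # CAPACITY_HOURS_PER_MONTH / concurrent_projects hours per month.
    duration_months = HOURS_PER_PROJECT * concurrent_projects / CAPACITY_HOURS_PER_MONTH
    # Assume projects in flight are, on average, half done, so the effort tied
    # up in unfinished work is half the total effort of everything started.
    wip_months = concurrent_projects * (HOURS_PER_PROJECT / 2) / CAPACITY_HOURS_PER_MONTH
    projects_per_year = 12 * CAPACITY_HOURS_PER_MONTH / HOURS_PER_PROJECT
    return duration_months, wip_months, projects_per_year

print(portfolio_stats(30))  # (12.0, 6.0, 30.0) -> 12-month delivery, ~6 months of WIP
print(portfolio_stats(10))  # (4.0, 2.0, 30.0)  -> 4-month delivery, ~2 months of WIP
```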

In both cases 30 projects are delivered in the year. However, the total cost in the second case is dramatically lower, because less capital is tied up in WIP, and the benefits are greater, because each project gains up to 8 additional months of productive use once in production.

Therefore, the IT department can actually be a lot more productive by reducing the number of projects it works on at a given time. Interestingly, if organisations actually implemented this change, they would most likely see individual productivity increase as well, along with the number of projects that could be completed, thanks to dramatic reductions in the project overhead and task switching embedded in the multi-project model.

So, for the CIOs out there who run a classic job shop: check how many projects you have going on and how much WIP you are carrying, and see what happens to productivity if you cut the number of active projects in half.

One final thought: Reflect on this for a moment and you’ll realise that this is the essence of Agile Transformation. Forget Scrum, Stand-up meetings, Sprints, Burn-down charts, Backlogs, etc. – these are all mechanisms to improve throughput and reduce WIP, hence improving IT productivity.


Posted in: Behaviour Driven Development, Big data, Business Intelligence, Performance testing      

 

There’s a book titled “The Knowing-Doing Gap” that describes why so many organisations fail to act on ideas and knowledge they already have. They know what to do, but don’t do it. Kind of like how many of us have no excuse for not dieting and exercising in our personal lives. Whole swaths of “best practices” fall into this category. So, here’s one such basic best practice: measuring ROI.

Occasionally I get to go to a CIO/IT leadership conference. Invariably someone will raise the topic of business/IT alignment. Then someone else will remark on how unfortunate it is that “in their organisation” IT does not audit or review past projects to determine whether the business benefits that were projected are actually being realised. As a consequence, IT’s contribution to business success remains shrouded in mystery.

Before addressing the question of whose responsibility measuring ROI actually is, I’d like to spend a few moments reflecting on the idea in general. According to Gartner, companies spend, on average, roughly $12,000 per employee per year on IT, about 65% of which goes to keeping the lights on, with the remaining 35% supporting transformation and growth initiatives. So an enterprise with 10,000 people will spend around $42 million a year on new ‘stuff’ (10,000 × $12,000 × 35%). I think we can all agree that not tracking whether an investment of this magnitude achieves the desired ROI and/or the planned intangible benefits is likely not a “best practice”.

So, now that we agree this should definitely be tracked, the question is who should do the tracking? Clearly IT has a vested interest, because the numbers should show the value IT delivers. But since the investment was made on behalf of a business sponsor, the business sponsor is also accountable for measuring and realising the ROI. Presumably, he or she presented a pretty good case to the IT Steering Committee or a similar governance body to make sure that, of all the presented initiatives, this particular one got funded.

The measurement part itself is not too difficult. I recommend organisations stay away from attempts to isolate the IT contribution and instead simply measure the success of the overall program against the metrics originally proposed in the business case. Did sales go up? Did personnel get freed up? Did cost savings materialise? Did customer satisfaction increase? The business case should provide ready-made success metrics that allow you to attribute these effects to the particular initiative. Track these numbers annually for a minimum of three years (or for as long as the business case projected), with intermediate numbers at six-month intervals if needed. While past performance is no guarantee of future success, business sponsors who can prove they were good stewards of the investment funds awarded to them in the past will have an easier time securing funding for future initiatives.

To make all this work, it is essential to have a governance body in place, such as an IT Steering Committee that includes senior representatives from the business and the CIO, and that oversees and acts as steward of the investment portfolio. Realised ROI, not just projected ROI, should clearly be one of the metrics it tracks. In some cases the Committee may appoint separate people to measure benefits or to streamline the measurement processes, but this is not a requirement. What is required is that the business sponsor is held accountable.

And for those of you who hear people say that “their organisation doesn’t measure return on past (IT) investments”: you should probably take note!


Posted in: Collaboration, IT strategy, SogetiLabs, Transformation      