
Following the release of Mastering Digital Disruption with DevOps, the fourth report in the Disrupt series from Sogeti’s Trend Lab VINT, I wanted to share some insights into how we can all welcome digital disruption, risk and uncertainty, and bring about the innovation needed to create true digital enterprises.

Main Business Challenges

The microservices concept is not really anything new – it is the implementation that has evolved. The main business challenge will be transitioning from SOA to microservices without suffering culture shock. It is a disruptive technology.

The idea is to break down traditional, monolithic applications into many small pieces of software that talk to each other to deliver the same functionality.

The concept of microservices has garnered significant attention recently, so, inevitably, the question has arisen: What’s the difference between a SOA and a microservices architecture? If you’ve tried searching the Web on that subject, you’ll find comments like:

 

  • “Microservices are SOA done correctly.”
  • “SOA is a lame enterprise approach, whereas microservices are a cool hacker approach.”
  • “Microservices architecture is a specialization of SOA.”
  • “The SOA paradigm left quite a bit open for interpretation.”
  • “Nothing.”

There are elements of truth in all of this, even the “nothing” comment, depending on your viewpoint. The problem is that just about everyone has a different viewpoint on SOA, based on what they had in their enterprise, which vendor’s technology they used, which consultants they worked with, which bloggers they followed, and so on. OASIS (the Organization for the Advancement of Structured Information Standards) established a reference model to create a consistent definition of SOA that everyone could use.

What are microservices?

The Wikipedia definition of microservices sums it up like this: “In computing, microservices is a software architecture style, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task.”
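
To make the definition concrete, here is a minimal, hypothetical sketch of a single microservice: one small, independent process exposing a language-agnostic HTTP/JSON API that any other service could call. The route, port and exchange-rate figures are invented for illustration and use only the Python standard library.

```python
# A minimal, illustrative "microservice": one small, independent process
# exposing a language-agnostic HTTP/JSON API. Routes, port and rates are
# invented example values, not part of any real system.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RATES = {"GBP_EUR": 1.17, "GBP_USD": 1.27}  # hypothetical sample data

class CurrencyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /rates/GBP_EUR -> {"pair": "GBP_EUR", "rate": 1.17}
        pair = self.path.strip("/").split("/")[-1]
        if pair in RATES:
            status, payload = 200, {"pair": pair, "rate": RATES[pair]}
        else:
            status, payload = 404, {"error": "unknown currency pair"}
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice is its own small process; this one only serves rates.
    HTTPServer(("", 8080), CurrencyHandler).serve_forever()
```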

Although the concept of microservices is broader than the underlying platforms or tools it uses, IDC acknowledges that advancements in software development and deployment methodologies – namely DevOps and container technologies – have played a pivotal role in elevating microservices to where they are today.

The transition from a traditional hardware server infrastructure in an on-premises datacentre to a cloud-based, software-defined infrastructure that can be created, modified or removed programmatically offers significant benefits for organizations. The microservices architecture enables companies to be much more agile and cut costs at the same time.

Microservices is an approach to application development in which a large application is built as a suite of modular services. Each module supports a specific business goal and uses a simple, well-defined interface to communicate with other modules.

Microservices came about to help solve the frustrations developers had with large applications whose change cycles were tied together. In a monolithic service-oriented architecture (SOA) deployment, each small change meant that the entire monolith needed to be rebuilt, which in turn meant that rebuilds weren’t happening as rapidly as they should have. In a microservices architecture, each microservice runs as a unique process and usually manages its own database. This not only gives development teams a more decentralized approach to building software, but also allows each service to be deployed, rebuilt, redeployed and managed independently.
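
As a rough sketch of the “unique process, own database” idea (not any particular vendor’s implementation), the example below models two tiny services, each owning a private SQLite store and exposing only a narrow interface to the other; in a real system each would run as its own process behind its own API.

```python
# Illustrative only: two "services", each with its own private database.
# In a real deployment each would run as a separate process behind its own API;
# here they are plain classes to keep the sketch self-contained.
import sqlite3

class OrderService:
    """Owns the orders database; nothing else touches it directly."""
    def __init__(self):
        self._db = sqlite3.connect(":memory:")  # private data store
        self._db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

    def place_order(self, item: str) -> int:
        cur = self._db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        self._db.commit()
        return cur.lastrowid

class InventoryService:
    """Owns the stock database; other services see only its interface."""
    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
        self._db.execute("INSERT INTO stock VALUES ('widget', 5)")
        self._db.commit()

    def reserve(self, item: str) -> bool:
        row = self._db.execute("SELECT qty FROM stock WHERE item = ?", (item,)).fetchone()
        if row and row[0] > 0:
            self._db.execute("UPDATE stock SET qty = qty - 1 WHERE item = ?", (item,))
            self._db.commit()
            return True
        return False

if __name__ == "__main__":
    inventory, orders = InventoryService(), OrderService()
    if inventory.reserve("widget"):            # communication via a narrow interface
        print("order id:", orders.place_order("widget"))
```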

This will give those who lived through component-based software, web services and service-oriented architecture (SOA) in the early 2000s a sense of déjà vu.

They were meant to do something similar. So what’s the difference? “Microservices are much lighter-weight than SOA, with all of the standards which that entailed,” says Ben Wootton, co-founder of Seattle-based DevOps consultancy Sendachi.

SOA was a supplier-driven phenomenon, with an emphasis on complex enterprise service buses – a form of middleware needed to communicate between all the services.

“Message standards are looser and are exchanged over lightweight message brokers,” says Wootton. “The tooling has evolved from the open source community rather than big enterprise.”

[Figure: microservices architecture]

Speed and agility

Companies are interested in microservices because they can bring speed and agility and encapsulate relatively small business functions, says Wootton. A currency conversion service is a good example, or an e-commerce shopping cart.

Companies can develop services like these more quickly and can change them more readily, because they are dealing with smaller code bases. This is not something that traditional, monolithic applications with millions of lines of code were designed for.

To innovate in customer engagement and to drive and adapt to digital disruption, enterprises must continuously change at increasing velocity.

Application development and delivery professionals must increase their velocity too, while also maintaining – better yet, improving – the quality and resiliency of what they deliver.

Contrast microservices with monolithic applications. The distinction between them is, first of all, one of deployment, but it may also be a design distinction.

For example, a Java-based web application can be written as a collection of well-designed, modular Java classes. However, these classes are not designed for independent deployment, so all of them get packaged into one large file for deployment. Microservices refactor an application into a series of smaller, separately deployable units.

There are four major benefits to microservices. First, when individual parts of a system are separately deployed, they can also be separately scaled. A system might need 20 running instances for one service and only two instances of another. Wal-Mart credits Node.js-based microservices for its ability to handle Black Friday volume spikes when other retailers had issues.

Second, separately deployed services can be implemented using entirely different platforms or design models. One service might need a huge in-memory database and a specialised programming language, while another might have a very small footprint and be written in JavaScript on Node.js. The order history service of Amazon’s retail store takes this principle further, using different implementations for recent orders and for older orders.

Third, when services are run separately, it is more difficult for the whole system to go down at once, and a failure in processing one customer’s request is less likely to affect other customers’ requests. This is particularly true since each service’s operation can be optimised for its specific scale, performance, security and transaction management requirements. Netflix, for example, ensures resilience by periodically knocking out its own production services to be sure that the whole system keeps running.

And fourth, microservices offer more options for incremental development and deployment. Agile methods – and particularly continuous delivery – can move more quickly when parts of a system can be built and deployed independently. Even with waterfall development, a bug fix is easier to deploy with microservices. Prior to production deployment, smaller units of development allow for smaller tests, more frequent testing and more frequent feedback.
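
The “smaller units of development allow for smaller tests” point can be illustrated with a simple, assumed example: one service’s logic is small enough to be tested, and therefore redeployed, entirely on its own. The shopping-cart function and prices below are invented for the sketch.

```python
# A single service's logic can be tested in isolation, without standing up
# the rest of the system. The function and test data are illustrative only.
import unittest

def cart_total(items: dict[str, int], prices: dict[str, float]) -> float:
    """Sum price * quantity for every item in the cart."""
    return sum(prices[name] * qty for name, qty in items.items())

class CartTotalTest(unittest.TestCase):
    def test_total(self):
        prices = {"book": 10.0, "pen": 1.5}
        self.assertEqual(cart_total({"book": 2, "pen": 4}, prices), 26.0)

    def test_empty_cart(self):
        self.assertEqual(cart_total({}, {}), 0.0)

if __name__ == "__main__":
    unittest.main()
```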

Sogeti, Microservices and Digital Transformation

Sogeti’s Digital Transformation Services will shorten the time it takes to reduce your Digital Debt and create the foundation for the New Style of IT, enabling you to implement a microservice architecture in a greatly reduced time frame.

Microservices combine the benefits of SOA and DevOps to allow an enterprise to deliver faster updates and new features. But enterprises also face a number of key challenges in making this type of architecture work well.

From a DevOps perspective, the microservices discussion is usually related to Docker or other containers. Both share an emphasis on lightweight individual units that can be managed independently, so containers are often regarded as a natural implementation choice for a microservice architecture.

When developers change a traditional monolithic application, they must do detailed QA testing to ensure the change doesn’t affect other features or functionality. But with microservices, developers can update individual components of the application without impacting other areas. It’s still necessary to test a microservices-based application, but it’s easier to identify and isolate issues, which speeds development and supports both DevOps and continuous application development.

Transitioning to microservices should be a gradual process, and refactoring only pieces of existing applications—without a full transition—can also bring benefits. To achieve success with microservices, organizations must first build a well-designed application according to existing platform standards, then refactor the application into a collection of microservices as necessary to meet business needs. With the right people, processes, and tools, microservices can deliver faster development and deployment, easier maintenance, improved scalability, and freedom from long-term technology commitment.

We recommend that you assess your application life cycle management maturity — specifically around your deployment processes — and seek tools that can help automate the implementation of these processes as they are now, and as you intend them to be across multiple development and operations teams and platforms. Please see DevOps, ARA and Disruptive Innovation for an insight into Application Release Automation.  Helpful next steps include:

  • Establish requirements for applications to narrow the scope of evaluation targets and to determine whether one tool or multiple tools will be required.
  • Prioritize integrations with existing development and ITOM tooling (especially cloud infrastructure or cloud management platform tools) in product evaluation criteria, with an eye toward using these tools in your broader provisioning and configuration environment.

If you are interested in discovering how these ideas can become a reality for your business, you can download Design to Disrupt: Mastering Digital Disruption with DevOps here or give us a call on 0330 588 8200.



 

Following the release of Mastering Digital Disruption with DevOps, the fourth report in the Disrupt series from Sogeti’s Trend Lab VINT, I wanted to share some insights into how we can all welcome digital disruption, risk and uncertainty, and bring about the innovation needed to create true digital enterprises.

Main Business Challenges

Many businesses operate large and complex IT environments with a variety of distributed applications installed, configured and supported in different ways.

As Thierry Hue of DynamicIQ puts it, the challenges faced by businesses today fall into three areas:

  • Operational Risk. It is difficult to ensure that applications are stable and that changes are applied efficiently and effectively. This is even more pronounced when dealing with highly configurable applications running on multiple environments which require a high frequency of changes. As a result, mistakes are often made when releasing a new version of an application, which leads to unnecessary production incidents.
  • Delivery Delays. It is important to deliver new functionality and bug fixes as rapidly as possible. However, installing and updating applications can be time consuming resulting in long delays in implementing change.
  • Resource Drain. Excessive resources are wasted on changes that should be easy to achieve, but which consistently take too much time and effort. These resources are diverted away from value-added activities.

As organizations look to ramp up their ability to deliver applications more and more quickly, unreliable deployments are an immediate bottleneck. Manual deployments based on outdated instructions, or semi-automated deployments using incomprehensible and unmaintainable scripting, are error-prone and time-consuming. Development and testing is stalled, causing frustration and delays.

The age of cascade development and long, drawn-out software projects is in decline. Businesses need continuous delivery of new functions, and this requires a far more streamlined approach from IT.

Agile development methodologies put pressure on delivery pipelines by introducing smaller, yet more frequent, updates. Then came the business demand for faster and more frequent releases into production, driven by competitive pressures and market disruption.

Digital Disruptor Traits

Automation vs. Manual

Using a rough and ready definition of manual deployments, it is easy to see where they fail. It is safe to say that a deployment is manual if it is characterised by operators logging into various machines and following written scripts, and such deployments are generally:

  • Slow and inconsistent
  • Prone to error and failure
  • Lacking in visibility and traceability

This final criterion is more complex as it requires a better understanding of what happens to a deployment once the process is “complete.” This is because a deployment process is not a separate and independent event in time; once the deployment is run, team members still need to know what was deployed where, why it was deployed, and who performed the deployment—especially if a failure occurred. For example, if a push-button deployment exists but the system does not provide visibility into the process, team members have to sift through logs on numerous machines to track down the error—which is not just manual but also laborious.
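
A minimal sketch of that visibility requirement, with invented field names rather than any specific tool’s schema, might record for every deployment what was deployed, where, why, by whom and whether it succeeded, in one central place:

```python
# Illustrative deployment audit record: what, where, why, who, and the outcome.
# Field names are invented for the example, not taken from any specific tool.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DeploymentRecord:
    artifact: str        # what was deployed, e.g. "webshop-2.4.1"
    environment: str     # where it went, e.g. "uat"
    reason: str          # why, e.g. a change or ticket reference
    deployed_by: str     # who performed the deployment
    succeeded: bool      # outcome, so failures are easy to find later
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_deployment(record: DeploymentRecord, path: str = "deployments.log") -> None:
    """Append one JSON line per deployment to a central audit log."""
    with open(path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_deployment(DeploymentRecord("webshop-2.4.1", "uat", "CHG-1234", "rboone", True))
```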

Such error-prone, time-consuming deployments stall development and testing, causing frustration and delays. Yet even now, some companies are still hesitant to automate.

Whether by automation or other means, the mechanisms to deploy the correctly configured service components should be established in the release design stage and tested in the build and test stages. Automation will help to ensure repeatability and consistency. The time required to provide a well-designed and efficient automated mechanism may not always be available or viable. If a manual mechanism is used, it is important to monitor and measure the impact of many repeated manual activities, as they are likely to be inefficient and error-prone. Too many manual activities will slow down the release team and create resource or capacity issues that affect service levels.

  • Manual deployments are inherently slow and error-prone.
  • Deployment automation used only in development or only in operations may help one silo, but leads to a hand-off where changes to the process may be insufficiently communicated.
  • Automated deployments provide superior audit trails.
  • An automated deployment infrastructure provides a framework to build upon. Additional activities such as automated functional testing can leverage the deployment infrastructure.
  • Deployments standardized across environments must still take into account environmental differences. The environment configuration is a key concern.

This situation seems to lead to a single question, “Is automation really worth the time and effort?” If you ask teams with automated deployments, the short answer will most likely be similar to, “Yes, but it depends on what you mean by automated.” As it turns out, a scripted process is not really an automated process. For those in the business of providing automation, a much higher threshold is set: If the “automation” does not address the patterns of deployment failure, then the process is not automated.

Manual deployments are slow, painful, and fail in production; yet, they are still extremely common in the IT industry. Why? A simple answer may be that until recently, powerful deployment automation tooling was not available. However, even today, there is a great deal of resistance to automating deployments. Deployment teams remain comfortable with their existing practices. Because they are at the keyboard executing the manual steps, they feel in control.

Manual deployments are broken and cannot be saved by more disciplined deployment engineers. The rote work of executing a deployment should be delegated to an automated system that can execute a process consistently. However, not all automated solutions are alike. Whether you are looking to buy or build a deployment automation system, there are some “must have” goals to keep in mind. A mature deployment system should be characterized by:

  • Reliable, highly successful deployments—especially in production
  • Easy deployments—encouraging teams to take new builds faster
  • Fast deployments—allowing early test environments to be on the newest build as soon as possible
  • Complete audit trail—spanning across all environments
  • Robust security and separation of duties—controlling who can do what, when, and where

To achieve these goals, an automation solution should contain at least a few key elements (a simple sketch follows the list below):

  • The entire deployment should be automated
  • The deployment should target specific environments and automatically adapt itself to a target environment
  • The files being deployed should come from a controlled artefact repository
  • Security and visibility should underpin the entire system
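
A minimal sketch of those elements, assuming a simple file-based artefact repository and one configuration file per environment (all paths and keys invented), might look like this:

```python
# Illustrative deployment routine: one automated process, driven entirely by
# (a) a versioned artefact from a controlled repository and (b) the target
# environment's configuration. Paths and keys are assumptions for the sketch.
import json
import shutil
from pathlib import Path

ARTIFACT_REPO = Path("repo")            # controlled artefact repository (assumed layout)
ENV_CONFIG_DIR = Path("environments")   # one JSON config file per environment

def deploy(app: str, version: str, environment: str) -> None:
    # 1. Fetch the exact versioned artefact -- never "whatever is on someone's laptop".
    artifact = ARTIFACT_REPO / app / f"{app}-{version}.zip"
    if not artifact.exists():
        raise FileNotFoundError(f"artefact not in repository: {artifact}")

    # 2. Adapt to the target environment from its configuration, not from memory.
    config = json.loads((ENV_CONFIG_DIR / f"{environment}.json").read_text())
    target_dir = Path(config["install_path"])

    # 3. Run the same automated steps for every environment.
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(artifact, target_dir / artifact.name)
    print(f"deployed {app} {version} to {environment} ({target_dir})")

if __name__ == "__main__":
    deploy("webshop", "2.4.1", "uat")   # would read environments/uat.json
```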

Digital disruption refers to changes enabled by digital technologies that occur at a pace and magnitude that disrupt established ways of value creation, social interactions, doing business and more generally our thinking. Digital Disruption can be seen as both a threat and an opportunity.

The Solution?

According to Gartner, by 2018, 50% of global enterprises will implement application release automation as part of a DevOps initiative, up from less than 10% today.

Providing fast, dynamic support to the business through the provision of continuous delivery of incremental functionality should be the top priority for any head of IT. However, just speeding up the flow of code from development-to-operations is not enough: it has to be fully managed and audited, ensuring that new apps, enhancements and updates are ready to do business. A properly implemented DevOps strategy, led top-down by the business, can enable organisations to be more competitive by delivering richer services to their customers faster.

Things to Remember:

Well-planned and implemented release and deployment management will make a significant difference to an organization’s service costs.

A poorly designed release or deployment will, at best, force IT personnel to spend significant amounts of time troubleshooting problems and managing complexity. At worst, it can cripple the environment and degrade live services!

Application Release Automation (ARA) is a relatively new but rapidly maturing area of IT. As with all new areas, there is plenty of confusion around what Application Release Automation really is and the best way to go about it. There are those who come at it with a very developer-centric mindset, those who embrace the modern DevOps concept, and even those who attempt to apply server-based automation tools to the application space. One thing is for sure: ARA is a Digital Disruptor.

Application release automation (ARA) refers to the process of packaging and deploying an application, or an update to an application, from development, across various environments, and ultimately to production. ARA solutions must combine the capabilities of deployment automation, environment management and modelling, and release coordination.
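
As an illustration of release coordination sitting on top of deployment automation and environment modelling, the sketch below tracks which version has reached which modelled environment and only allows promotion along a defined path; the stage names and release details are assumptions, not any specific ARA product’s model.

```python
# Illustrative release coordination: a release is promoted through a fixed
# path of modelled environments, and the record shows where each version is.
# Stage names and the deploy step are assumptions for the sketch.

PROMOTION_PATH = ["dev", "test", "uat", "prod"]   # modelled environments, in order

class Release:
    def __init__(self, app: str, version: str):
        self.app, self.version = app, version
        self.deployed_to: list[str] = []          # environments this version has reached

    def promote(self, environment: str) -> None:
        """Deploy to the next environment only if the previous stage is done."""
        index = PROMOTION_PATH.index(environment)
        if index > 0 and PROMOTION_PATH[index - 1] not in self.deployed_to:
            raise RuntimeError(
                f"{self.app} {self.version} has not yet reached "
                f"{PROMOTION_PATH[index - 1]}, so it cannot go to {environment}"
            )
        # The actual deployment automation step would be invoked here.
        self.deployed_to.append(environment)
        print(f"{self.app} {self.version} -> {environment}")

if __name__ == "__main__":
    release = Release("webshop", "2.4.1")
    for env in ["dev", "test", "uat"]:
        release.promote(env)                      # prod would follow the same path
```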

Relationship with DevOps

ARA tools help cultivate DevOps best practices by providing a combination of automation, environment modelling and workflow management capabilities. These practices help teams deliver software rapidly, reliably and responsibly. ARA tools support a key DevOps goal, continuous delivery, by enabling a large number of releases to be made quickly.

Continuous integration and continuous delivery have emerged as the next set of concepts following the interest in Agile and DevOps. Implemented correctly, the principles of CI and CD can streamline and optimize the software lifecycle from start to finish.

DevOps promises much, and an effectively implemented DevOps strategy can have a powerful positive impact on business performance. However, simply speeding up the movement of code from development through testing to operations will result in more errors and downtime. Putting in place a set of automated processes based on solid policies ensures that code flows as it should. Abstracting systems of engagement away from existing systems of record provides a means of optimising continuous delivery to the business and its users of IT. For many organisations, an approach of ‘refurbishment’ and continuous improvement of the system of engagement will provide the greatest return on investment, leading to a measured migration to a new world of composite applications.

Relationship with Deployment

ARA is more than just software deployment automation – it is deploying applications using structured release automation techniques that increase visibility for the whole team. It combines workload automation and release management tooling as they relate to release packages and their movement through the different environments in your DevOps pipeline. ARA tools help you regulate your deployments, how you create and deploy environments, and when and how you deploy releases.

The intersection of DevOps with IT operations is a two-way street — even as developers increasingly take on more of operations, IT Ops pros must think more like app programmers.

The technical changes that come with establishing a DevOps culture affect the IT infrastructure even if separate IT operations teams still manage day-to-day matters. New application development practices such as containerization, microservices and release automation, as well as new infrastructure management techniques that require programming skills, mean IT Ops pros must learn new tricks to keep that infrastructure running smoothly.

As DevOps evolves, greater collaboration between Devs and IT Ops will be the order of the day.

Continuous Delivery / Continuous Deployment

If the definition of Continuous Delivery is to make production-ready updates available for production deployment, then Continuous Delivery stops at production’s door. Continuous Deployment is the next step: approved updates are not just made available for production deployment, they are automatically deployed into production.
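
A small, assumed example makes the distinction concrete: both practices produce an approved, production-ready build, but only Continuous Deployment pushes it to production without a manual trigger.

```python
# Continuous Delivery: every approved build is *ready* for production.
# Continuous Deployment: every approved build is *pushed* to production automatically.
# The function name and the boolean flag are illustrative assumptions.

def release_build(build_id: str, tests_passed: bool, auto_deploy_to_prod: bool) -> str:
    if not tests_passed:
        return f"{build_id}: rejected, pipeline stops"
    if auto_deploy_to_prod:
        # Continuous Deployment: no human gate between "approved" and "live".
        return f"{build_id}: deployed to production automatically"
    # Continuous Delivery: the build waits at production's door for a manual trigger.
    return f"{build_id}: available for production deployment (awaiting manual release)"

if __name__ == "__main__":
    print(release_build("build-1042", tests_passed=True, auto_deploy_to_prod=False))
    print(release_build("build-1043", tests_passed=True, auto_deploy_to_prod=True))
```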

Using Application Release Automation (ARA) means no longer needing to maintain custom scripts for deployment, configuration and provisioning. Users move beyond islands of automation and separate practices for production and non-production environments. ARA combines release coordination with deployment automation across the Continuous Delivery pipeline, including fully compliant production deployments across the entire IT and ITSM stack.

Production-proven release automation ensures predictability and compliance for deployments to both production and lower environments.

Who are the Innovators in the ARA tool arena?

A disruptive innovation is an innovation that disrupts an existing market. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically by lowering price or designing for a different set of consumers.  Some of the innovators include:

  • Automic Release Automation (released by Automic)
  • CA Release Automation (released by CA Technologies)
  • Application Modeller (released by DynamicIQ)
  • Codar (released by HP Software Division)
  • Serena Deployment Automation (released by Serena Software)
  • UrbanCode Deploy & UrbanCode Release (released by IBM)
  • XL Deploy & XL Release (released by XebiaLabs)

ARA addresses the identified business challenges by:

  • Reducing the opportunity for human error, thereby driving down Operational Risk.
  • Accelerating change, thus minimising Delivery Delays.
  • Reducing the time and effort required to deliver application technology change, thereby eliminating Resource Drain.

[Figure: Application Modeller – ARA Integrated Configuration Environment]

As Thierry Hue of DynamicIQ puts it: “The idea of the Integrated Configuration Environment is to make a parallel with the IDE (Integrated Development Environment), where you should manage your configuration and environment like you manage your code and binaries.”

[Figure: Automic Release Automation – ARA deployment process]


Gartner Market Guide for Application Release Automation Solutions

20 July 2015 | ID: G00275085

Growing demand for faster and even continuous delivery of new and better applications is driving investment in ARA solutions that reduce friction across the entire application development life cycle. Prioritizing automation, environment management and release coordination needs is imperative.

Recommendations

  • Prioritize automation, environment management and release coordination capabilities based on an assessment of your current and planned deployment processes.
  • Evaluate solutions from smaller, independent vendors as well as those from large IT operations management (ITOM) software vendors to compare best-of-breed innovation with broader DevOps tool portfolios.
  • Require a time to value of three months or less in vendor evaluations.
  • Require use-case-oriented proofs of concept (POCs) and active customer references.

Sogeti, ARA and Digital Transformation

Sogeti’s Digital Transformation Services will shorten the time it takes to reduce your Digital Debt and create the foundation for the New Style of IT, enabling you to implement Application Release Automation in a greatly reduced time frame.

We recommend that you assess your application life cycle management maturity — specifically around your deployment processes — and seek tools that can help automate the implementation of these processes as they are now, and as you intend them to be across multiple development and operations teams and platforms. Helpful next steps include:

  • Establish requirements for applications to narrow the scope of evaluation targets and to determine whether one tool or multiple tools will be required.
  • Prioritize integrations with existing development and ITOM tooling (especially cloud infrastructure or cloud management platform tools) in product evaluation criteria, with an eye toward using these tools in your broader provisioning and configuration environment.
  • Organizations that want to extend the application life cycle beyond development to production environments using a consistent application model should evaluate development tools with ARA features, or ARA point solutions that provide out-of-the-box integration with development tools.

If you are interested in discovering how these ideas can become a reality for your business, you can download Design to Disrupt: Mastering Digital Disruption with DevOps here or give us a call on 0330 588 8200.

 


 

Situation

While on an engagement with one of Sogeti’s clients, I spotted an opportunity to introduce a service strategy based on TMap in order to provide a Testing Service. Its implementation would have to follow the ITIL Life-cycle stream. This would, in effect, be an on-premises TPaaS. A Testing Service would need to be treated and developed as an ITIL Service, and viewed by the business as an ITIL Service, to ensure best practice.

At the time I was involved in testing customer-facing applications as part of a Data Center relocation. I took QC 11 off the shelf, where it had been collecting dust for the previous three years. First, I needed a service strategy. I intended to store all of the test assets in Quality Center.


Complication

The client’s test artifacts were spread across the estate on a variety of platforms, none of which had a particular test focus. These included Jira for test requirements and defects, and Auspex for test plans, project documentation and test results. Test cases, for the most part, were held in Excel spreadsheets. This situation was going to make life more difficult for me, as I was managing five separate projects at the same time. Tracking project and test documentation in such disparate repositories would have put the test schedule at significant risk.

Often there were long searches for the test and project assets needed for testing a project, primarily due to the lack of a single repository and inadequate search facilities.

Testing was slipping its schedule, and the issues arising from this situation revolved around a lack of confidence in the test management of the Data Center migration.

This situation gave me the opportunity to re-commission Quality Center, using ITIL best practices, in a PoC to demonstrate to the business the advantages the tool could bring.


Solution

So, what could Sogeti do to make a difference? Sogeti’s Testing Services include Test Management and Automation offerings. The solution I had in mind combined the two, through tooling supplied by an alliance partner, HP. HP ALM would need to be treated as a Service and viewed by the business as a Service offering.


The benefits of structured testing

I saw Sogeti’s world-leading structured test management approach, TMap®, as the way to help the client deliver more complex, high-quality software faster, saving the client both time and money. TMap provides a complete toolbox for setting up and executing tests, including detailed and logical instructions for testers.

TMap is a proven method of structured testing, based on Sogeti research and user experience. It provides a complete and consistent, yet flexible approach, which is suitable for a wide variety of organisations and industries. Therefore TMap has been selected as the standard test approach by many leading companies and institutes in Europe and the US.


Outcome

The TMap structured test approach provided the client with the following advantages:

  • comprehensive insight into the risks associated with software quality
  • a transparent test process that is manageable in terms of time, cost and quality
  • early warnings when product quality is insufficient and defects may occur
  • a shorter testing period in the total software development lifecycle
  • re-use of test process deliverables (such as test scripts and test cases)
  • consistency and standardization – everyone involved speaks the same test language.


TMap and ITIL

ITIL is the most widely recognized framework for ITSM in the world. In the 20 years since it was created, ITIL has evolved and changed its breadth and depth as technologies and business practices have developed. ISO/IEC 20000 provides a formal and universal standard for organizations seeking to have their service management capabilities audited and certified. While ISO/IEC 20000 is a standard to be achieved and maintained, ITIL offers a body of knowledge useful for achieving the standard.

TMap and ITIL: brothers in arms.

ITIL Life-cycle stream

ITIL provides guidance to service providers on the provision of quality IT services, and on the processes, functions and other capabilities needed to support them.

TMap provided the ALM strategy that formed the ITIL Service Strategy. The lifecycle starts with a service strategy: an understanding of who the consumers of the service offering are, the IT capabilities and resources required to develop the offering, and the requirements for executing it successfully.

My role with the client started with the Service Strategy and flowed through to Continual Service Improvement.

I worked closely with the client in service design to ensure Quality Center was designed effectively to meet my client’s expectations and support the Test Strategy.

In service transition I built, tested and moved the solution into production, enabling the business to achieve the desired value.


Bottom Line

TMap and the ITIL Life-cycle stream helped me achieve what had seemed impossible for my client: better service, at lower costs.


Information technology (IT) governance

ITIL is a recognised best practice for implementing IT governance. It further defines how we implement IT Service Management and where testing sits in the model.


 

Last week some Sogeti colleagues and I attended Microsoft’s ‘ALM with Visual Studio and Team Foundation Server 2013’ one-day event in London.

Visual Studio and Team Foundation Server provide an Application Lifecycle Management solution that covers the range of software development activities from requirements capture through to development and onto testing and release into production.

The event was a walk through these areas using Visual Studio 2013 and Team Foundation Server 2013, and covered most new features, as well as the existing capabilities that these build upon.

The event was hosted by Richard Willmott (Developer and Platform Evangelist) with presentations by Giles Davis (Developer Tools and Technical Specialist), both from Microsoft. The sessions were mainly demo driven, and during the afternoon we were shown demonstrations in the following areas: Testing Tools functionality, including Test Case and Defect Management; Manual Testing; Load and Performance Testing; Bug Tracking; Exploratory Testing; Automated Functional Testing with Coded UI; Lab Management; and Reporting.

Other topics covered during the day include:

– An update on the Visual Studio family of development tools, including Visual Studio Online
– Requirements capture, agile planning and portfolio management
– Development including version control choices (centralised vs. git), code quality and automated builds
– Deployment with Visual Studio Release Management
– Collaboration tools including Team Rooms and stakeholder feedback
– Dev/Ops integration with IntelliTrace, System Center and Application Insights
– Taking advantage of the cloud in Dev and Test environments

This was one event where the refreshments and lunch were not the best parts (although they were still great).  Au contraire, the presenter was excellent in terms of product knowledge and presentation skills.  Dare I say the event was actually entertaining?  Aside from the standard functionality of the products, a lot of which hasn’t changed since VS2010, I learnt about the following points in more depth:

– TFS has strong version control, with an ALM element, and also supports GitHub, a web-based hosting service for software development projects that use the Git revision control system.
– The cloud version of TFS is known as ‘Visual Studio Online’ – a little confusing, as one would have thought ‘TFS Online’ would have been more appropriate. An MSDN subscription is required for more than five users.
– With regard to integration of the Kanban Agile approach to testing, there is no Process Template for Kanban at this time.
– The platform offers integrations for Eclipse, JIRA, MS Project, as free Add-Ins.
– It’s possible to have unlimited virtual users (VUs) for load testing, as long as you have Visual Studio Ultimate
– There’s an option to have customisable Process Templates, and the user can also customise Burndown charts through Reporting
– In terms of Release Management, Workflows can be added using PowerShell
– Team Rooms can be used for enhanced collaboration, such as requesting feedback
– Excel can easily be used for bulk updates to TFS, which is useful in migrations

Whilst I enjoyed the whole event I have to say my favourite part of the day – and the section I also found most useful – was the afternoon, when the demos focused on Testing specifically.  I picked up tips and learnt a lot.

I have, in the not-too-distant past, successfully introduced Microsoft Visual Studio Testing Tools to one of Sogeti’s clients.  This introduction went well, and the client followed up at first with a Pilot of the tools, before subsequently adopting them fully.

At the moment I’m preparing to introduce a couple more Sogeti clients to Microsoft ALM, so I found the sessions very relevant. Although this was my first time at such an event, I’m making sure it won’t be my last, and I look forward to learning more about a wide array of tools and techniques! I would encourage all other testers to do the same in order to stay clued up on current trends and best practices.

Here are just a few links that readers may find useful to learn more about Visual Studio, TFS and ALM:


 

I was recently asked to go into a large financial institution to provide technical project management for an upgrade from TestDirector 8 to Quality Center 11.52. It involved hundreds of projects and hundreds of users. As most readers will know, this equals a massive upgrade spanning several version releases; multiple quantum leaps, in effect. Here’s how we approached it:

Naturally, the upgrade couldn’t be carried out in one day over a weekend using the same machine that had hosted TD8, whereby we would have put a disc into the TD8 server, clicked the ‘Upgrade’ button, and a few hours later TD8 would have magically become ALM 11.52. Unfortunately, it was much more complex than a desktop application upgrade.

There had been a few previous attempts to do the upgrade internally, but during all of these attempts the client had encountered issues. This is why they decided it was time to call in the experts from Sogeti.

What was called for, primarily, was emotionally intelligent leadership. Aside from the normal stakeholder management, I needed to work closely with the Architects and contacts within HP to come up with the technical specification of the staging environment in which we would upgrade the large number of TD8 projects to ALM11.52. This is what we decided on:

It was decided to go from QC 9 to QC 10, rather than straight to QC 11, in order to delay the migration of the repository to the new SMART Repository in ALM 11.52. Twelve new servers had to be built – seven of them VMs, all of them re-deployable.

The next thing I needed to do was get high-level estimates from the Server Team for all of the components, bearing in mind that I was also implementing a Disaster Recovery strategy. I then went to the Project Management Office (PMO) for assistance with the creation of a Project Initiation Document (PID) and project governance.

A work track was then created to monitor the effort put into the projects by the various areas of the business, along with infrastructure costs. These costs were treated as Capex, coming out of the small budget the Project Sponsor had for the upgrade project. The consultancy costs, which included my time on the project, had to come out of Opex to ensure there weren’t overruns on the budget. These costs were raised through POs and recorded in the work track as ‘Fixed Costs not Billable’ to the project.

I worked with Architects, Database Analysts and IS Delivery to ensure the staging environment was delivered. This environment included seven new virtual servers and five new physical servers, which took six months to procure and implement. Full DR capability was also achieved in this time for pre-production and production environments.

The industrialisation of the upgrade/migration process took nine months from beginning to end. When we finished, it was robust, resilient and efficient, with fail-over capability. Active Directory (LDAP) authentication was also implemented, although the client wished to stay with Quality Center authentication for the time being. Many TestDirector 8 projects and users were upgraded to ALM 11.52, and the project was deemed a success.

This success was only possible because of good teamwork and empathy for those on the team, both in-house and supplier-side. Empathy was more important than IQ on this occasion, as in most.

To find out more about Sogeti’s Software Testing consultancy services, including tool selection and implementation, please visit our website.

Robert Boone AUTHOR:
A hands-on, pragmatic ISEB certified Test Automation Manager with over ten years of automated test tool and automation development/testing experience.
