SOGETI UK BLOG


DevOps is a transformational challenge for many organizations, but not all roads lead to DevOps: we cannot move forward to DevOps without a continuous Quality Assurance (QA) strategy, which in turn requires transforming traditional testing and QA activities. We propose to drive this transformation through structured changes based on a Continuous QA for DevOps maturity model: analyze the current situation, trace a route, and progressively implement it as an ordered set of service-oriented pieces that complete a particular puzzle for each particular organization. Agility, automation and integration of those pieces are the keys to success.

Since DevOps is about reducing the gap between development (Dev) and operations (Ops) through continuous development and continuous operations, continuous QA becomes a must in order to establish quality gates across the whole process, aimed at providing checkpoints, alignment and “lubrication” of role synergies.

QA activities may become either a brake on DevOps transformation or an essential thrust; it depends on us. If QA is not adapted to be flexible and agile in the context of continuous iterations, it may block progress towards DevOps. However, if (1) continuous QA chains are formalized and automated to rely as little as possible on manual QA activities, (2) accelerators in the form of excellence services are introduced, (3) customer experience analysis provides feedback to the next iterations, and (4) reporting for integral monitoring and analysis is established, then continuous QA becomes a key factor in reducing the Dev-Ops gap.
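
As an illustration of how such a chain can be formalized, the sketch below turns a quality gate into an automatable check. It is a minimal sketch only: the thresholds, field names and report structure are assumptions invented for the example, not a prescribed toolchain.

    # Minimal sketch of an automated quality gate for a CI/CD stage.
    # All thresholds and fields are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class QualityReport:
        pass_rate: float         # fraction of tests passing, 0.0-1.0
        coverage: float          # statement coverage, 0.0-1.0
        open_critical_bugs: int  # known critical defects still open

    def gate_passes(report: QualityReport) -> bool:
        """Return True if the build may be promoted to the next stage."""
        return (report.pass_rate >= 0.98           # regression suite green
                and report.coverage >= 0.80        # minimum coverage reached
                and report.open_critical_bugs == 0)

    report = QualityReport(pass_rate=0.99, coverage=0.85, open_critical_bugs=0)
    print("Promote build:", gate_passes(report))

A gate like this can run after every build, turning the checkpoints described above into executable pipeline steps rather than manual reviews.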

Continuous QA provides reliability, feedback and control points in all directions, which are essential for improving confidence, collaboration between roles, double-checking of understandings and misunderstandings, and knowledge management. Someone could argue that QA activities add effort to the DevOps chain, and that is true, although many QA activities may be automated in some way. And, finally, it is also true that we cannot avoid QA unless we are brave enough to take risks that could turn into heavy consequences for quality and reputation. The question is: do we want to follow a path with checkpoints that refocus us when we take inefficient or dangerous directions, or do we just want to run as fast as possible?

Therefore, in order to face QA in a DevOps transformational project, we propose to:

  • Analyze the current continuous QA maturity level.
  • Trace a route towards continuous QA for DevOps.
  • Design the path to higher maturity goals by composing a service-oriented puzzle. Such a puzzle is a sequence of services (test automation of different types, User eXperience analysis, integrated dashboards, test-design generation from user stories, test data management,…) that need to be set up and integrated on top of an explicit process.
  • Progressively implement the selected services and integrate them into the chain in order to become “more DevOps” (a toy sketch of such a maturity assessment follows this list).
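
As a toy illustration of the first two steps, the sketch below encodes maturity dimensions and derives the next level to target in each one. The dimensions, level names and current state are invented for the example; they are not an official model.

    # Hypothetical continuous QA maturity dimensions, ordered from
    # least to most mature. All names are illustrative only.
    MATURITY_MODEL = {
        "test automation": ["manual only", "scripted UI",
                            "layered UI+API", "continuous in pipeline"],
        "test data management": ["ad hoc", "refreshed on demand",
                                 "versioned and automated"],
        "reporting": ["per-team spreadsheets", "shared dashboard",
                      "integrated real-time monitoring"],
    }

    def next_steps(current: dict) -> dict:
        """For each dimension, return the next maturity level to target."""
        steps = {}
        for dimension, levels in MATURITY_MODEL.items():
            position = levels.index(current[dimension])
            if position + 1 < len(levels):
                steps[dimension] = levels[position + 1]
        return steps

    current_state = {
        "test automation": "scripted UI",
        "test data management": "ad hoc",
        "reporting": "shared dashboard",
    }
    print(next_steps(current_state))

Each suggested step then maps to a concrete service (a piece of the puzzle) to be implemented and integrated into the chain.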

Transformation to DevOps involves people, processes, actions and monitoring. Being DevOps is not only about crossing a bridge and forgetting the previous side. The transformation needs to follow a plan and a coordinated step-by-step progression. Just start down the road: with safety checks, assisted selection of services and caution, but in motion.

AUTHOR: Albert Tort
Albert Tort is a Software Control & Testing specialist in Sogeti Spain. Previously, he served as a professor and researcher at the Services and Information Systems Engineering Department of the Universitat Politècnica de Catalunya-Barcelona Tech. As a member of the Information Modeling and Processing (MPI) research group, he focused his research on conceptual modeling, software engineering methodologies, OMG standards, knowledge management, requirements engineering, service science, semantic web and software quality assurance.


 


What happens in a test automation project when the input test case designs to be automated and their associated requirements are not trustworthy, or when they change, evolve or become inconsistent even after the project has started?

One of the main consequences of such situations is bewilderment. This poses a challenge: how do we deal with the bewilderment factor in projects?

 

The bewilderment factor is the extra effort produced as a result of one or more of the following circumstances:

  • Overcoming contradictory or unclear instructions, requirements, specifications or test case designs.
  • Ambiguity or missing information in the elements that serve as input for the project.
  • Unclear objectives to be achieved.
  • Working in a poorly defined environment.
  • Unexpected or non-communicated changes in the application.

Such circumstances are especially risky in test automation projects, since automated test cases perform an unambiguous sequence of actions on the system under test. If the specifications are not clear, they need to be clarified until they become unambiguous. Sometimes this effort also creates communication issues, responsibility conflicts and misalignments between people's expectations. The consequence is, again, increased bewilderment.

We advocate including the bewilderment factor in estimation techniques. It is a subjective factor, but it usually raises the effort required in many projects. It can aggregate many of the social effects that are not usually considered when estimating a project, even though their consequences become tangible in terms of time, money and results. Bewilderment is especially critical when the project kickoff does not ensure that the work environment, the objectives and the necessary inputs are clearly stated. Sometimes this ambiguity cannot be totally resolved, but if we know the risk, why not take it into account?

As a social factor, we need to evaluate bewilderment using empirical techniques based on the consequences experienced in previous projects, evaluated on a common set of project environment variables. In test automation projects in particular, it is essential to evaluate whether (a sketch of how such a checklist might feed an estimate follows this list):

  • The automation scope is well-specified.
  • Input manual test cases to be automated are well-specified with an adequate level of detail. Otherwise, test cases need to be (re)designed first, according to the scope and the testing criteria (coverage, risks, key functionalities, etc.).
  • The execution criteria and the reporting formats are decided. It includes aspects such as the testing environment that will be used to execute the test cases, how the results will be generated and managed, etc.
  • There exists a test environment that fits data requirements and controlled changes.
  • A framework for reducing maintenance effort and improving code comprehensibility is defined and explicitly specified.
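
One hedged way to make the factor tangible is to score the checklist above and fold unmet items into the estimate as an effort multiplier. The weights below are invented placeholders that would need calibration against past projects.

    # Sketch: folding the bewilderment factor into an effort estimate.
    # Weights are illustrative assumptions, to be calibrated empirically.
    CHECKLIST = {
        "automation scope well specified": 0.10,
        "input test cases well specified": 0.15,
        "execution and reporting criteria decided": 0.05,
        "suitable test environment exists": 0.10,
        "maintainability framework defined": 0.10,
    }

    def bewilderment_multiplier(answers: dict) -> float:
        """Each unmet precondition adds its weight to the base effort."""
        penalty = sum(weight for item, weight in CHECKLIST.items()
                      if not answers[item])
        return 1.0 + penalty

    answers = {
        "automation scope well specified": True,
        "input test cases well specified": False,
        "execution and reporting criteria decided": True,
        "suitable test environment exists": False,
        "maintainability framework defined": True,
    }
    base_effort_days = 40
    print(base_effort_days * bewilderment_multiplier(answers))  # 50.0 days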

Transversal social abilities and attitudes within the project team, such as programming experience, autonomous learning, communication abilities and management skills, may help mitigate this factor.

Many times, in software engineering or test automation projects, estimations are overtaken by reality when the bewilderment factor is not taken into account. Bewilderment is not countable at first sight, and it is therefore harder to estimate and harder to justify as real effort from the customer's point of view. However, ignoring it increases the risk of failed effort estimations, since it is a frequent cause of extra effort in many contexts.

In summary, we should take into account that people matter, and that dealing with people-focused social effects (like bewilderment) in project planning and management may count (or discount) in the results.

 



 

Test automation is a promising way to increase the efficiency of repetitive testing tasks. However, it poses a few challenges in ensuring its effectiveness and expected benefits.

When thinking about test automation, the first thing to remember is that testing includes much more than a set of repetitive tasks. Therefore, not everything can (or should) be automated. For example, key testing tasks such as the definition of test objectives or the design of a test plan are engineering tasks that constitute the base for testing, regardless of whether the tests are executed manually or automatically. On the other hand, not all test case components or test types need to be repeated, while others (load tasks, regression tests, etc.) can really maximize the benefits of automation.

The decision on whether test cases are to be automated or not needs to be supported by an expected Return on Investment (ROI) analysis, considering aspects such as the effort to create the automated tests, the execution time, the feedback provided and the maintenance effort implied by expected changes. In other words, we cannot limit the analysis to the conception of automation as a one-shot task, because the resulting test scripts need to be maintained.
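
A rough break-even sketch makes the point; all the hour figures below are made up for illustration.

    # Cumulative cost of manual execution vs. automation over N release
    # cycles. All figures are hypothetical placeholders.
    def manual_cost(cycles: int, hours_per_run: float) -> float:
        return cycles * hours_per_run

    def automated_cost(cycles: int, build_hours: float,
                       maintenance_per_cycle: float,
                       run_hours: float = 0.1) -> float:
        return build_hours + cycles * (maintenance_per_cycle + run_hours)

    for cycles in (5, 20, 50):
        manual = manual_cost(cycles, hours_per_run=8)
        automated = automated_cost(cycles, build_hours=40,
                                   maintenance_per_cycle=1)
        print(f"{cycles:>3} cycles: manual {manual:>5.1f}h, "
              f"automated {automated:>5.1f}h")

With these invented numbers, automation only pays off after a handful of cycles, and a fragile architecture that inflates the maintenance cost per cycle can push the break-even point out indefinitely.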

It is well known in software engineering that maintenance effort may be significantly greater than the effort required to develop new functionality. We also know that maintainability relies on the ability to change or modify the implementation, and that this ability depends on the architecture, which organises the code and makes it more or less modular, changeable, understandable and robust. Exactly the same happens in automation: not every automation approach leads to the same ROI. This is why, in an automation project, we promote aligning the objectives and the environment characteristics (frequency of software changes, data availability and integrity, etc.) with the definition of a suitable architecture, aimed at obtaining well-organised and structured test implementations that minimise the risks (maintenance effort, data variability, requirements changes). The page-object sketch below illustrates the point.
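
One common instance of such an architecture is the page-object pattern. The Selenium-style sketch below is illustrative only; the page, locators and welcome message are hypothetical.

    # Page-object sketch: UI locators live in one class, so a UI change
    # touches one place instead of every recorded script.
    class LoginPage:
        # The single place that knows how the login screen is built.
        USERNAME = ("id", "username")
        PASSWORD = ("id", "password")
        SUBMIT = ("css selector", "button[type=submit]")

        def __init__(self, driver):
            self.driver = driver

        def login(self, user: str, password: str) -> None:
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # In a real suite, 'driver' would come from a test fixture.
    def test_valid_login(driver):
        LoginPage(driver).login("alice", "secret")
        assert "Welcome" in driver.page_source

When the login screen changes, only LoginPage needs editing; the test logic, and its readability, survive the change.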

This discussion poses a question: is test automation simply a ‘recording & reproducing’ process, or a more complex engineering process? There exist tools that help us record scripts, which interact with graphical user interfaces and reproduce the exact recorded interactions. However, what happens if something changes? Do we record all the scripts again, or do we modify, script by script, the generated code in order to adapt it to the changes? And if we do modify the code (maintenance), would it not be worth implementing it (with the assistance of recording to generate code chunks) on top of a modular, understandable and maintainable architecture? The report – The Forrester Wave: Modern Application Functional Test Automation Tools, Q2 2015 – reinforces this idea and states that “Fast feedback loops frequently break UI-only test suites”. Consequently, it suggests: “Ratchet down UI test automation in favor of API test automation. Modern applications require a layered and decoupled architecture approach.”
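
Following that layered approach, the same behaviour can often be checked below the UI. Here is a pytest-style sketch using the requests library; the endpoint, payload and response shape are invented for the example.

    # API-level test sketch (pytest + requests). The endpoint, payload
    # and response fields are hypothetical.
    import requests

    BASE_URL = "https://example.test/api"

    def test_create_order_returns_201():
        payload = {"item": "widget", "quantity": 2}
        response = requests.post(f"{BASE_URL}/orders", json=payload,
                                 timeout=5)
        assert response.status_code == 201
        assert response.json()["quantity"] == 2

Such tests are immune to cosmetic UI changes and run faster, leaving a thinner layer of UI tests to cover what only the UI can show.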

Related Posts:

  1. Test automation – Don’t try to build Rome in a day
  2. Test levels? Test types? Test Varieties!
  3. Is it a good idea solving test environment problems “later”?
  4. Automation a.k.a “fail often, fail early”




 

Agile approaches follow iterative divide-and-win strategies, based on fluent communication, incremental work and continuous feedback. The Agile Manifesto states that “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” However, the principle “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation” does not hold true in many projects because of time pressure, distributed teams, team changes, strict regulations, large applications, deep domain knowledge or complex organisations. In some contexts, face-to-face conversations may not be enough (or not always possible), because “faces” may change, may become unavailable or may simply leave the project. Moreover, if the knowledge is spread out only across different minds, the whole picture may be lost. In these situations, communication mechanisms need to be enriched with explicit knowledge specifications. Taking this into account, a question arises: can we use accessible, centralised and explicit repositories of knowledge as an accelerator for Agile approaches? The answer is ‘no’ if documentation is ambiguous, ill-maintained and bureaucratic. But the good news is that the answer can be ‘yes’ if documentation is ‘alive.’

What does “alive documentation” mean? Documentation is alive if it is executable by simulation, unambiguous, and can be (easily) changed in reaction to frequent alterations. Documentation may be a drawback for agility if we conceive it as a non-updated, unstructured collection of words with ambiguous meaning, because then its utility is limited and it usually ends up dead. In other words, we don't need Shakespeare to document system knowledge. However, agility can be supported if we are able to manage structured, concise and easy-to-update specifications that support knowledge discussion at each iteration. If we have a useful map, we will find the right way faster. What if we were able to structure functional system knowledge in models from which we could generate purpose-driven documentation that could easily be regenerated after changes? This is not the future; this is the present (watch the Recover webinar for details).
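
A minimal illustration of documentation that “executes”, using Python's standard doctest module: the documented behaviour doubles as a check, so the text fails loudly when the code drifts. The function and business rule are invented for the example.

    # "Alive" documentation sketch: the docstring examples are run as
    # tests, so the documentation cannot silently go stale.
    def discount(total: float) -> float:
        """Orders of 100 or more get a 10% discount.

        >>> discount(50.0)
        50.0
        >>> discount(100.0)
        90.0
        """
        return total * 0.9 if total >= 100 else total

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

If the rule changes and the docstring does not, the “documentation” fails together with the test run instead of quietly becoming obsolete.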

Establishing a purpose-driven modeling strategy for managing knowledge in Agile projects is essential for enhancing the potential of many other agility principles, especially those about change (“welcoming changing requirements, even late in development”), because models facilitate impact analysis and early detection of inconsistencies, in addition to providing reporting and estimation facilities.

In summary, the claim remains that “our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, but we also need to deliver the associated knowledge representations in the form of lightweight models or documentation. In this way, at each iteration, the piece of working software becomes part of the result, and its piece of documentation is integrated into the global specification view for better agility through enhanced knowledge management.

To read the original post and add comments, please visit the SogetiLabs blog: Be agile and bring documentation to life

Related Posts:

  1. Everybody tests with Agile
  2. Why Agile won’t solve maintenance problems
  3. Large, complex projects benefit most from Agile
  4. Agile series: The pitfalls



 

The webinar on “The Recover Approach: Reverse Modeling and Up-To-Date Evolution of Functional Requirements in Alignment with Tests” will be held on Monday, February 2 at 5:00 PM CET. The speaker for the session is Albert Tort.

Block your calendars now for what promises to be a riveting session!

About the topic

Recover is an innovative solution (winner of the Capgemini-Sogeti Testing Innovation Awards 2014) developed by Sogeti Spain, which moves the value of testing beyond reporting defects by reusing the know-how testers acquire through experimentation with a system. Recover aims to reduce the knowledge debt in projects by obtaining, maintaining and delivering up-to-date functional models and auto-generated technical documentation. It uses a structured specification of test cases as input for the test-driven generation of functional UML models, and automatically delivers the specified knowledge as navigable HTML functional documentation. The solution includes a simulation tool that automatically checks the alignment between the knowledge model and the test cases: each time a test is added or modified, Recover helps update the model and the documentation, and if the model is changed, the solution detects the test cases that need to be refactored according to the new knowledge.
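
To make the general idea concrete (this is emphatically not Recover's implementation, only a toy sketch of the test-cases-in, documentation-out principle, with invented fields and content):

    # Toy sketch: structured test cases in, generated HTML documentation
    # out, so the documentation can be regenerated after every change.
    TEST_CASES = [
        {"feature": "Login", "action": "submit valid credentials",
         "expected": "user session starts"},
        {"feature": "Login", "action": "submit wrong password",
         "expected": "error message shown"},
    ]

    def to_html(test_cases: list) -> str:
        rows = "".join(
            f"<tr><td>{t['feature']}</td><td>{t['action']}</td>"
            f"<td>{t['expected']}</td></tr>"
            for t in test_cases
        )
        return ("<table><tr><th>Feature</th><th>Action</th>"
                "<th>Expected</th></tr>" + rows + "</table>")

    print(to_html(TEST_CASES))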

How to join the session

  1. Click on the following link: http://www.anymeeting.com/sogetilabswebinar (from your PC, Mac, iPad or Android tablet)
  2. Register with your name, location and email address
  3. Click on “use my computer” to see the content we will share
  4. If requested, allow the system to use your webcam/microphone
  5. Only presenters are able to speak at first. If you want to speak, click on the unmute button next to your name in the attendees list. You can also write live comments in the chat box.

Please note that the webinar will be recorded and the replay will be made available later.

To read the original post and add comments, please visit the SogetiLabs blog: Coming up: Webinar on ‘The Recover Approach’

Related Posts:

  1. Webinar: Uniting Robotics and IT Testing
  2. Dear testers: Yes, we model!
  3. Top 10 post: “Dear testers: Yes, we model!”
  4. Modelization of Automated Regression Testing, the ART of testing

 

