We know that cloud computing is “the new normal” just like virtualization was in the past. And we also know that the adoption of cloud computing by your organization can come with a series of benefits including:

  1. Reduced IT costs: You can reduce both CAPEX and OPEX when moving to the cloud.
  2. Scalability: In this fast-changing world, it is important to be able to scale your solutions up or down as the situation and your needs demand, without having to purchase or install hardware or upgrades yourself.
  3. Business continuity: When you store data in the cloud, you ensure it is backed up and protected, which in turn helps with your continuity plan, because in the event of a crisis you’ll be able to minimize downtime and loss of productivity.
  4. Collaboration: Cloud services allow you to share files and communicate with employees and third parties in a timely manner in this highly globalized world.
  5. Flexibility: Cloud computing allows employees to be more flexible in their work practices, because it’s simpler to access data from home or virtually any place with an internet connection.
  6. Automatic updates: When consuming SaaS you’ll always be using the latest version of the product, avoiding the pain and expense associated with software or hardware upgrades.

But once you ask yourself what could possibly go wrong, you open your eyes to “cloudy weather”, where you must plan for, identify, analyze, manage and control the risks associated with moving your data and operations to the cloud.

To help you with the identification process, here is a list of risks that your organization can face once you start or continue the transition to the cloud:

  1. Privacy agreement and service level agreement: You must understand the responsibilities of your cloud provider, as well as your own obligations. In some situations, it is your obligation to configure the service correctly in order to qualify for the best SLA possible.
  2. Regulatory compliance: Remember that although your data is residing on a provider’s cloud, you are still accountable to your customers for any security and integrity issues that may affect your data and therefore you must know the standards and procedures your provider has in place to help you mitigate your risk.
  3. Location of data: Know the location of your data and which privacy and security laws will apply to it, because it’s possible that your organization’s rights may otherwise be marginalized.
  4. Data privacy and security: Once you host confidential data in the cloud you are transferring a considerable amount of your control over data security to the provider. Ask who has access to your sensitive data and what physical and logical controls does the provider use to protect your information.
  5. Data availability and business continuity: How is your organization and the provider prepared to deal with a possible loss of internet connectivity? Weigh your tolerance level for unavailability of your data and services against the uptime SLA.
  6. Data loss and recovery: In a disaster scenario, how is your provider going to recover your data and how long will it take? Be sure to know your cloud provider’s disaster recovery capabilities and if and how they have been tested.
  7. Record retention requirements: If your business is subject to record retention requirements, how well is the cloud provider prepared to suit your needs?
  8. Environmental security: Cloud computing data centers are environments with a huge concentration of computing power, data, and users, which in turn creates a greater attack surface for bots, malware, brute force attacks, etc. Ask: how well prepared is the provider to protect your assets through access controls, vulnerability assessment, and patch and configuration management controls?
  9. Provider lock-in: What is your exit strategy in case your provider can no longer meet your requirements? Can you move your data and operations to another provider’s cloud? Are there technical issues associated with such a change?

Remember, we are talking about your data and business here, and once you transition to the cloud you are still accountable and responsible for what happens to them. And yes, moving to the cloud comes with a series of benefits and rewards if the associated risks are identified and well managed.



Posted in: Cloud, Data structure, Digital strategy, Innovation, privacy, Quality Assurance, Research, Security, Software Development, Technical Testing, Virtualisation      



Big data / NoSQL Cassandra / SOLR / Natural Language Processing – Text Mining for Pre-screening of Cancer Clinical Trials.

Cancer clinical trials are research studies that test the effectiveness of a new medical treatment on cancer patients. They are key to medical progress, and their success depends essentially on the number of patients enrolled onto trials.

Pre-screening patients manually requires lengthy investigation and successive matching against patients’ records during a limited phase.

Add to this the large amount of money spent on this phase, and automating the eligibility pre-screening process turns out to be a promising and beneficial solution for cancer treatment.

In fact, automating this process is essentially an information retrieval task. Medical records, which originate mainly from surgical pathology laboratories, constitute a rich source of unstructured data. They are written in a natural/human language which is complex and difficult for a machine to process. Dealing with this type of data requires a structuring phase that extracts useful information in order to provide the necessary knowledge to the machine; in other words, it requires translating human language into a machine-recognizable representation.

Text Mining and Natural Language Processing (NLP), combined, constitute a solid solution for representing the valuable information stored in medical records. Both deal with free text, and the main objective is to extract non-trivial knowledge from it. The field encompasses everything from information retrieval to terminology extraction, text classification, spelling correction and sentiment analysis. NLP methods rely heavily on probability theory, statistics and machine learning. They also draw on linguistic concepts, grammatical structure and the lexicon of words.

Cancer research is now benefiting from advances in Text Mining and uses its techniques to support clinical decisions. More precisely, automating the matching of patients to cancer clinical trials has been the subject of many studies and solutions dealing with information retrieval from medical records. Working with cancer data means covering hundreds of cancer diseases with a very large lexicon. Many medical terminologies have been constructed to group medical concepts together and thus provide a unified lexicon for the medical field. These libraries, mainly UMLS, SNOMED and CIMO, are a major component of natural language systems designed for medicine. They serve as a link between patient data and the Text Mining system, enriching clinical records and providing synonyms for medical concepts.


To get a clear view of how Text Mining and NLP can help automate clinical trial matching, let’s look more closely at the most widely used methods for processing the natural language stored in clinical data. First, recall that the objective is to extract medical concepts and semantic types from both the clinical trial criteria datasets and the patient data. NLP provides a semantic representation of natural language sentences in order to map them to their original meaning. It uses either rule-based algorithms or the machine learning paradigm for more complex language processing.

Most automated patient pre-screening systems are rule-based. They are simple, fast and easy to deploy. Such methods perform well on simple types of information, but for more complex data, ML algorithms, although a black box for clinicians, are more robust and give better performance. Rule-based models are mainly used for medical text pre-processing: tokenization, sentence parsing, redundancy removal, etc. After pre-processing the free text, an assertion detection phase follows in order to detect negation. The NLP system also tries to detect medical terms using different medical terminologies. The other approach is to use machine learning models for the same purpose, through the analysis of a set of documents or individual sentences that have been hand-annotated with the correct values to be learned. The main ML algorithms used for NLP are Naïve Bayes, Support Vector Machines and Random Forests. They take as input a large set of features derived from patients’ records and try to learn rules from the annotated examples. ML can also be used to learn from previously selected patients’ data, by detecting the features that explain enrollment into previous clinical trials.
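As an illustration of the rule-based side, here is a minimal sketch of assertion (negation) detection, in the spirit of algorithms such as NegEx. The trigger phrases and the fixed scope window are simplified assumptions for illustration, not a production clinical ruleset:

```python
# Toy rule-based negation detection: flag medical concepts that appear
# shortly after a negation trigger phrase. Trigger list and window size
# are illustrative assumptions.

NEGATION_TRIGGERS = ["no evidence of", "denies", "negative for", "without", "no"]

def detect_negated_concepts(sentence, concepts):
    """Return the subset of `concepts` that appear within a short window
    after a negation trigger in `sentence`."""
    text = sentence.lower()
    negated = set()
    for trigger in NEGATION_TRIGGERS:
        idx = text.find(trigger)
        if idx == -1:
            continue
        # Look only at the few words following the trigger (a crude scope window).
        window = " ".join(text[idx + len(trigger):].split()[:6])
        for concept in concepts:
            if concept.lower() in window:
                negated.add(concept)
    return negated

negated = detect_negated_concepts(
    "Patient denies chest pain; no evidence of metastasis.",
    ["chest pain", "metastasis", "hypertension"],
)
# "chest pain" and "metastasis" are flagged as negated; "hypertension" is not.
```

Real systems use much larger trigger lexicons, handle pseudo-negations ("no increase in") and determine the scope window linguistically rather than by a fixed word count.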

After retrieving all useful information from the unstructured data and expanding it with all possible medical hyponyms from medical ontologies, this information serves as a data source for matching patients against a trial’s inclusion and exclusion criteria. Given a cancer clinical trial and a set of candidate patients, the Text Mining system supplies clinicians with a restricted list of eligible patients, significantly reducing the time and effort of manual pre-screening.
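The matching step itself can be pictured as simple set operations over the extracted concepts. The patients and criteria below are made-up examples, not data from the project:

```python
# Eligibility matching sketch: a patient is shortlisted when their extracted
# concept set covers every inclusion criterion and touches no exclusion criterion.

def eligible_patients(patients, inclusion, exclusion):
    """Return ids of patients whose concepts satisfy all inclusion
    criteria and none of the exclusion criteria."""
    shortlist = []
    for pid, concepts in patients.items():
        if inclusion <= concepts and not (exclusion & concepts):
            shortlist.append(pid)
    return shortlist

patients = {
    "P1": {"breast cancer", "HER2 positive"},
    "P2": {"breast cancer", "pregnancy"},
    "P3": {"lung cancer"},
}
shortlist = eligible_patients(
    patients,
    inclusion={"breast cancer"},
    exclusion={"pregnancy"},
)
# P1 is shortlisted; P2 is excluded (pregnancy); P3 fails inclusion.
```

In practice the concept sets on both sides are first normalized against the same terminology (UMLS/SNOMED codes), which is exactly why the synonym expansion described above matters.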


Due to the large volume of data to be managed, we selected and designed a big data architecture based on Datastax. Why Datastax? Because it supports Hadoop, Spark, Cassandra and SOLR out of the box. Deploying it through the MS Azure portal, it took around one hour to get several nodes working and ready to use.

We imported all the data into a Cassandra database, then indexed it with SOLR, which allowed us to perform data exploration and search quickly.

We added synonyms coming from SNOMED and UMLS in order to use SOLR’s synonym search feature. Thanks to dedicated NLP developments in Python, we implemented Natural Language Processing features (negation, semantic improvements, medical term identification, stemming, etc.) to improve the performance of the pre-screening process.
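Conceptually, synonym search works like the toy expansion below: any term in a group matches documents containing any other term of the group. The synonym groups here are invented examples, not actual SNOMED or UMLS content, and SOLR itself reads such groups from a synonyms.txt file at index or query time:

```python
# Toy query-time synonym expansion, analogous to SOLR's synonym filter.
# The groups are illustrative stand-ins for terminology-derived synonyms.

SYNONYMS = [
    {"myocardial infarction", "heart attack", "mi"},
    {"neoplasm", "tumor", "tumour"},
]

def expand_query(term):
    """Return all synonyms of `term`; searching for any member of a group
    should match documents containing the others."""
    term = term.lower()
    for group in SYNONYMS:
        if term in group:
            return group
    return {term}  # unknown terms expand to themselves

expanded = expand_query("heart attack")
# expands to the whole group: myocardial infarction / heart attack / mi
```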


By the end of 2016, we will complete the test phase and add some improvements, taking user feedback into account.

A new post will be published then, with the final conclusions and results.


Building on all the available scientific articles, we were able to design a cancer clinical trials pre-screening solution in French. Several products exist in English, but no solution is available for France or other French-speaking countries.

The business benefits offered by our solution are already obvious: by suggesting a list of patients to the clinical trials team in a few minutes instead of after several days of manual screening, the team can focus on confirming the proposed results instead of screening tons of patient records and data.

Contributor: Bilal AZENNOUD, Data Scientist, SOGETI France


Posted in: Automation Testing, Big data, Biology, Data structure, Innovation, Quality Assurance, Requirements, Research, Socio-technical systems, Software Development, Testing and innovation      



Often I hear colleagues or co-workers talk about Agile transitions and the impediments that certain scrum teams encounter. For instance, the team struggles with how to plan and execute the non-functional testing for the next sprint in the retrospective session (as part of the sprint review). Later in that same session, the team discusses how to cope with the fact that the business keeps changing the scope in the middle of the sprint. ‘How can we keep the scope fixed?’ they wonder.

What intrigues me is that some of these colleagues/co-workers have a very theoretical view and approach these challenges almost as if they have been out of the mud for too long. They write great literature and are experienced lecturers, but when was the last time they were actually part of a scrum team (as an Agile coach, at the minimum level of involvement)? These colleagues have not been part of the vanguard of an agile, scrum or DevOps team at all. How committed does that make them to the actual engineers in the field? Do they know their daily struggle, their real impediments as outlined in the first paragraph? Can they still identify with them, based on recent experience rather than theory?

This got me thinking, because a lot of people nowadays seem to be adapting (or trying very hard to adapt) to a healthier lifestyle. They feel they are too caught up in work and have lost the work-life balance, the sense of self in a way. They get caught up in organic food, calorie and step counting on all kinds of smart devices that are part of the internet of things, and of course lots of fitness and/or running. In a way, they go back to basics to “find themselves” again and rediscover or redefine what that entails for them. Some of them are even strongman runners or mud masters, but are they also (or still) mud masters on a professional level?

Why not organize a mud master contest for IT professionals/colleagues? After all, a mud bath revitalizes the skin and removes dead cells; refreshing at times, don’t you think? Going from books and theories to the actual playground makes one realize once again what that is and what it means. It refreshes the memory and revitalizes one’s view of things, and you can do a firm reality check too. Look objectively at what has changed and how that impacts your approach in practice in the future. Change your reality and theory accordingly is my advice. Part of the, let’s call it “IT mud master challenge”, would be an annual mud bath of at least a month. The winner will be the person with the most hands-on experience: kaizen, continuous improvement that is.


Becoming or staying a mud master can definitely aid in refreshing your approach towards different things.
This brings me to the conclusion that if you want to realize a change, it may actually require you to get your hands dirty.
So I invite everybody to keep muddling! Metaphorically speaking that is.


Posted in: DevOps, Human Behaviour, Human Interaction Testing, Internet of Things, Quality Assurance, Scrum      


In an environment where the winner in any market is most often also its digital master, the question of whether your company will turn into a digital predator or prey finds its answer in your testing abilities.

Come to TestExpo™ on 12 October at the Emirates Stadium and listen to Andreas Sjostrom, an internationally awarded digital strategist and Vice President at Sogeti, as he links business trends to practical aspects of automated testing, omni-channel and cloud.

This conference will be co-located with Agile Expo, offering a great opportunity to network.

Get registered here:


Posted in: Automation Testing, Digital strategy, Research, Test Expo, Testing and innovation      


This Time it’s Personal

They say emotion has no place in business, but the culture change required for a successful DevOps transformation necessitates a re-evaluation of this old adage. DevOps is not a prescriptive set of processes and tools. It requires a paradigm shift in which previously siloed teams come to understand the benefits of DevOps and of one another’s work, and develop a passion that drives the changes required to work in a more Agile, open, collaborative, incentive-driven, accountable and measurable way, delivering service excellence across the entire lifecycle. Organisations with a mature DevOps strategy are deploying code 30 times more frequently than their competitors and getting their code into production 200 times faster[1]. With these significant value-adds in mind, we all need to realise that where we previously strove to align the goals of siloed departments with the requirements of the business, we are now bringing people together to collaborate, work towards common goals and change their way of thinking and working for good. Change and adaptability need to become the norm. This time it’s personal.

Cultural Characteristics

Culture is hard to pin down and describe so here is a checklist of what comprises a DevOps culture:

  1. Everyone understands why you are changing to a DevOps culture and the benefits for each team and individual including reduced time to market, faster deployment of releases, increased availability, earlier bug detection through test driven development and continuous testing, proper performance and user feedback resulting in a better product that clients love.
  2. A common understanding of the current culture in order to facilitate change more easily. For example, determining if you are a stick or carrot culture – and formulating a DevOps strategy that will eradicate the negative aspects of the existing culture and enable a more Agile and end-product focused way of working.
  3. Empathy, respect and trust between Development, Operations and Testing and across all non-technical teams of the company in regard to understanding one another’s roles, solving problems and delivering a better product.
  4. A dedicated DevOps team comprised of experienced operations people with a mix of skills and experience and the right tools and processes to facilitate continuous delivery and address pain points as individuals with a team mentality and shared goals.
  5. Clearly defined roles within the team but with a collective responsibility that negates a “blame culture”. Development isn’t rewarded for creating code and Operations isn’t blamed when the code doesn’t meet expectations in production. The whole team is accountable and rewarded when the product reaches fruition and is ready for the customer.
  6. Cross-functional team thinking in a DevOps way and cross-skilling to obtain a better understanding of everyone’s roles and a more efficient way of working by arranging resources around specific projects, rather than a more traditional approach of creating a core skillset.
  7. Understanding the problems that your customers are facing and cultivating a genuine passion to combat these issues and provide a better product and customer experience.

Cultural Challenges

The biggest barrier to successful DevOps is a failure to ensure that all your people are on board from the outset and the mistaken belief that your journey is over when you are finally working in a DevOps way. Everyone needs to understand the benefits to themselves and the whole business, while change and adaptability must be fluid and continuous.

Be aware that Operations may be concerned that automation will replace their role, leaving them redundant in the new DevOps culture. They may also worry that a shorter development lifecycle means increased risk. Developers may fear that they need to be available 24/7.  A common cry of resistance may be “that’s not how we usually do it” and a common mistaken belief is often that DevOps cannot be successfully applied to legacy systems in traditional enterprises.

Combating Cultural Challenges

These myths and challenges need to be recognised, discussed and overcome to achieve DevOps success, and each success should be celebrated so that everyone knows what success and high performance look like and is able and willing to replicate it. At Sogeti we advocate the following steps:

  1. Carry out a current state evaluation and assess where you are now compared with where you want to be. Create a clear Roadmap for change to a DevOps culture.
  2. Prioritise automating those things that lend themselves most readily to successful automation and continue to do the rest manually at first, until you have perfected the easy wins. This will help people to realise that automation is not stealing their jobs but rather freeing them up for more important work.
  3. Measure success with quantifiable KPIs that align with the wider business goals and give a clear indication of when DevOps is working for you. For example, you could set a goal to reduce deployment time by two-thirds, from 12 hours to 4 hours, or to increase the proportion of bugs detected in testing from 30% to 60%. These metrics enable you to pinpoint bottlenecks, banish silos and continuously improve. The metrics themselves should be dynamic, subject to change depending on your results, and visible to every team member.
  4. Mentor and lead the way to successful DevOps by partnering with a company who has proven expertise and genuine DevOps evangelists who can engage with your executive leaders to help them better understand and disseminate the benefits and goals and create a DevOps vision that everyone can buy into.
  5. Train up existing staff, make new hires and utilise the expertise of your DevOps partner’s employees to scale your teams up and down on a per project basis.
  6. If you want to facilitate continuous improvement and innovation from the outset, consider a fully managed DevOps service at enterprise or program level.
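The KPI arithmetic in step 3 above is straightforward; as a quick sketch using the example figures mentioned there (deployment time falling from 12 hours to 4, bug detection rising from 30% to 60%):

```python
# Illustrative KPI calculations for the example targets in step 3.

def reduction_pct(before, after):
    """Percentage by which a metric falls."""
    return (before - after) / before * 100

def remaining_pct(before, after):
    """What fraction of the original value remains, as a percentage."""
    return after / before * 100

# Deployment time: 12 hours down to 4 hours.
fall = reduction_pct(12, 4)       # a reduction of about two-thirds
remaining = remaining_pct(12, 4)  # the new time is about a third of the old

# Bug detection: from 30% to 60% of bugs caught in testing.
detection_gain = 60 - 30          # +30 percentage points
```

Pinning the KPI to absolute figures (hours, bug counts) as well as percentages avoids ambiguity about whether "reduce by a third" means the reduction or the remainder.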


It’s Been Emotional

There’s no doubt that if you are on your way to DevOps success, the journey will not only be about adopting the right processes and tools, it will start with the people, and it will be emotional. This needs to be inspired with the right leadership, passion and proven experience. Whilst DevOps is not simple to achieve, when it is done right, the benefits far outweigh the risks and challenges. This blog has focussed on the cultural and leadership aspects of DevOps but of course there are many other elements required to achieve success. To discover how Sogeti can help you make the change to a successful DevOps culture, take a look at our DevOps Services guide here.


Posted in: communication, Data structure, DevOps, Digital, Digital strategy, Innovation, integration tests, IT strategy, Opinion, Quality Assurance, Rapid Application Development, Research, Software Development, Testing and innovation, Transformation      