Why SOA?

If you’re looking for a way of designing, developing, deploying and managing your enterprise systems that closely aligns your business goals with your existing technical solutions, then Service-Oriented Architecture (SOA) is more than likely the logical solution.

Your SOA strategy should be integral to your existing enterprise architecture, designed by the Board of Directors and management to achieve the organisational and operational goals of your business.

SOA is not, however, without its challenges: it is usually implemented with fast-evolving Web services, using tools that have yet to reach the requisite level of maturity, and truly comprehensive testing is crucial to its success.

In order to maximise the business benefits that can be derived from the complex integration of business services, SOA requires a clear business vision and a way of governing its use across the enterprise.

From a testing perspective, the very flexibility SOA offers brings challenges of its own. The complex landscape of web services, middleware services and existing legacy services can render traditional testing approaches ineffective. A SOA testing approach formed specifically for the architecture it serves – modular, standardised, and driving re-use wherever possible – is fundamental.

SOA Testing

SOA testing is a combination of service testing, process verification, Test Data Management (TDM) and accelerated SOAP automation. It also includes enabling practices such as continuous integration testing and service virtualisation. Testing teams need to test the systems at both the service provider and the client end interfaces to ensure that the systems are as close to error free as possible. Tests also need to be grouped correctly into a regression suite with a workflow-based data provisioning system.
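As a rough illustration of what service-level testing involves, here is a minimal sketch in Python. The ‘GetOrderStatus’ operation, namespace and canned response are all invented for the example; the point is the shape of the check a regression suite would run against each service interface – build an envelope, send it, assert on what comes back.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(operation, params, service_ns="http://example.com/orders"):
    """Build a SOAP 1.1 request envelope for a (hypothetical) operation."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{service_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{service_ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

def extract_result(response_xml, tag):
    """Pull a single result element out of a SOAP response body."""
    root = ET.fromstring(response_xml)
    elem = root.find(f".//{{*}}{tag}")  # namespace-wildcard search
    return None if elem is None else elem.text

# A canned response, of the kind a virtualised service might return.
canned = (
    f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
    '<soap:Body><GetOrderStatusResponse xmlns="http://example.com/orders">'
    "<status>SHIPPED</status></GetOrderStatusResponse></soap:Body></soap:Envelope>"
)

request = build_envelope("GetOrderStatus", {"orderId": 42})
assert "GetOrderStatus" in request
assert extract_result(canned, "status") == "SHIPPED"
```

In a real suite the same assertion style would run against the live endpoint, with TDM supplying the `orderId` values for each workflow.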

Is In-house Testing a Viable Option?

If this all sounds highly technical, it’s because it is – and herein lies one of the biggest challenges in SOA testing: to get it right in house, your testing team would need to learn new technologies, processes and tools, and understand the complex supporting work streams required by SOA’s unique architectural ecology. Your existing classical tools are unlikely, for example, to be able to test non-UI components, interpret messages that flow across an ESB or handle SOA protocols such as SOAP and WS-Security. The new SOA tools will be unfamiliar, and the process is highly complex and requires expert handling. If software testing is not a core competency for your business then you are unlikely to have the time, technical resources and budget to carry out the required level of testing within your organisation, so it makes sense to look into outsourcing it in order to ensure it is carried out to the highest possible standard. This of course raises the question, “who should I outsource to?”

The Pitfalls of Using a Single Provider for Development and Testing

At first glance it appears to make sense to allow the vendor who is carrying out the SOA development to implement the testing as well: you’ve already explained your business structure to them, they obviously know the product well, you’ll save time by drawing up a single contract, and it’s easier for management to deal with one provider.

However, seeking to save time and effort with a single contractor creates a potential conflict of interest, bringing unnecessary risk to the development process that could compromise the integrity of your SOA system. A vendor who is also developing your system is either already satisfied that the product is of sufficiently high quality, or knows that it is perhaps not ideally suited to your purposes but is seeking to meet a sales quota. In either situation there is a possibility that they may not carry out the depth and breadth of testing required to ensure your SOA system is running at the optimum level needed to meet all of your goals.

In addition, in a business where QA testing is not the core competency, testing is often passed to more junior members of staff who are working their way up to being developers, and these contractors simply do not have anything approaching the knowledge and expertise of a dedicated testing company. Another consideration is that testers who work for developers are under pressure from the development team to execute the testing as fast as possible and not delay delivery and payment, which can lead to incomplete tests being carried out and mistakes being made.

The Positives of Hiring Separate Testing Experts

In addition to these negative motivators, there are also some weighty positive reasons to instruct a separate business to carry out your SOA testing. Outsourcing offers higher quality work coupled with a faster time-to-market, and the cost-to-value ratio is lower than when carrying out the testing in house or through a single vendor, meaning that your overall ROI should be higher.

When you hire a separate software testing company you know that you are acquiring the services of a totally impartial, trained expert who has chosen testing as a career path and who is under no pressure from the development team. By dividing the development and testing you can also outsource to smaller businesses, reducing your cost and also avoiding the employee churn that often occurs in larger companies; in short, you get a more personal and bespoke service for less money.

As Forrester states in their webinar, ‘Selecting a Supplier for Outsourced Testing Services’, “Choosing an outsourcing supplier for testing is complicated by a number of factors, including the willingness to adapt existing, internal processes and activities to an outsourced model and the possible need to manage outsourced development and testing relationships separately.”

Selection Criteria for the Right Testing Company

At Sogeti we think that some of the most important selection criteria you should be looking for in a partner include:

  • SOA testing services that recognise testing from a business needs perspective
  • A SOA test team that can work in synchronicity with your development teams to ‘parallel run’ both the service and the interface testing
  • The ability to ‘virtualize’ the whole service early, so there are no nasty surprises – ironing out defects earlier and at lower cost
  • The right onshore resources to stand by your side, combined with real-time performance dashboards to provide clear progress updates
  • A proven track record that demonstrates their depth and breadth of experience in SOA testing, with domain experience in every major vertical – from banking to telecoms to fashion
  • SOA and middleware factories that can flex up or down at need with a commercial model that can also be based on outcomes, not resources.
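To make the ‘virtualise the whole service’ point above concrete, here is a minimal sketch of the idea (the endpoint and payload are invented): a throw-away stub answers every request with a canned response, so consumer-side tests can run long before the real service exists.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"orderId": 42, "status": "SHIPPED"}  # invented response data

class StubHandler(BaseHTTPRequestHandler):
    """Answers every GET with a canned payload, like a virtualised service."""
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer-side test can now run against the stub as if it were real.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/order/42") as resp:
    payload = json.loads(resp.read())

server.shutdown()
assert payload["status"] == "SHIPPED"
```

Commercial service-virtualisation tools do far more (stateful behaviour, latency simulation, protocol variety), but the principle is the same: defects in the consumer are ironed out earlier and at lower cost.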

In our clients’ experience using a specialist independent testing supplier can help you realise cost savings well in excess of 30%.

Sogeti’s Expertise

In a benchmarking study of testing services from 13 major vendors, leading independent analyst Ovum rated the combined testing practice of Capgemini and Sogeti number 1 worldwide. Ovum commented in particular on our “ample capacity”, test process expertise, customer intimacy and responsiveness. If you would like further information about SOA testing or any of our other testing services, please visit our website or email your enquiry to enquiries.uk@sogeti.com.

AUTHOR: Darren Coupland
Darren is the sector head of telecommunications at Sogeti.

Posted in: Business Intelligence, Enterprise Architecture, integration tests, test data management, Uncategorized, Virtualisation      


Working on a large transformation project for a client in the Banking sector, I recently had the opportunity to think about how to build an innovation stream within the project. The project was fully oriented towards IT infrastructure, and the customer’s main challenge was to push IT innovation in order to improve the IS infrastructure.

While preparing a dedicated session on this topic with Pierre Hessler, I noted two very relevant points that Pierre stressed during the session.

Point 1: Digital changes the position of IT. The drawing below tries to explain this evolution:


Step 1: Apply before ERP Era


Step 2: Co-Create (ERP Era)


Step 3: Inspire (Digital Era)

As we can see, the position of IT is very challenging in this new world, and a kind of schizophrenic role is emerging: be upstream, in order to provide the organisation with all the benefits of technology, and be downstream, in order to implement those technologies in a very short time given the time-to-market constraint. Pierre Hessler called this new challenge “Double or Quit”, due to the other internal, parallel competitors working on Digital (Chief Digital Officer, Chief Data Officer, etc.).

Point 2: Maslow Pyramid Analogy


This second point can be linked to the first, showing the different stages that an IT department has to pass through in order to become a true business partner. The analogy was built by the CIO of a large automotive company and has been reused and adapted for our client. The table below indicates how the Maslow pyramid could be crafted for an IT department:


This model could be applied and used to measure the improvement of an organisation. Does that make sense?  Feel free to engage!

To read the original post and add comments, please visit the SogetiLabs blog: IT & DIGITAL: DOUBLE OR QUIT OPTION

Related Posts:

  1. Regulation: Innovation’s Double-Edged Sword
  2. Be a Digital Leader with Omnichannel Services
  3. Innovation is not an option
  4. Digital convergence requires transformation and omnichannel solutions


AUTHOR: Jacques Mezhrahid
For 25 years, Jacques has worked in the software services industry and has been involved in a variety of innovative projects across different market sectors. In charge of the Digital Convergence Offering for Sogeti France since 2012, combining mobility, collaboration, architecture and Big Data, he and his team develop solutions, skills and expertise within the company.

Posted in: Developers, Human Interaction Testing, Human Resources, Innovation      


Machines Fit for Mensa

Allegedly a buzzword coined by accident by Kevin Ashton, co-founder of the MIT Auto-ID Center, at a Procter & Gamble presentation in 1999, the ‘Internet of Things’ (IoT) is creating a brave new world in which physical objects are integrated into our networks and lives, usually via the cloud, to create a smarter living and working environment.

IDC expects IoT technology and related services revenue to grow to $7.3 trillion by 2017 and estimates there will be as many as 212 billion connected devices by 2020 (though Cisco gives a rather more conservative estimate of 50 billion devices). So what does this mean in practical terms?

  • For Dutch start up, Sparked, it meant attaching wireless sensors to cows so that when one of the herd is sick or pregnant the farmer is notified of their condition and whereabouts, allowing him to act accordingly and resulting in 200MB of data per cow per annum.
  • In the Health sector, Corventis has connected their mobile cardiac monitor to the IoT so you can even have a smart heart.
  • Hive from British Gas connects up a customer’s thermostat, boiler and router, allowing them to control temperature and switch heating on and off via an app or website.
  • The largest Brewery in Switzerland is the proud owner of intelligent beer kegs which automatically order a refill when the amount left inside reaches a certain level – therefore they never get empty and customers are safe in the knowledge that their favourite beer will always be on tap.

The IoT has the potential to impact every area of our business and personal lives in a big way, and transform every industry – from health to transport, entertainment to food & drink – whilst also affecting our internal business processes such as supply chain management, distribution, telepresence and document management.

When Good Machines Go Bad

So what could a typical morning look like for someone using ‘smart’ devices? Let’s imagine that your 8am breakfast meeting is postponed by an hour, but there’s also a huge traffic jam on the motorway that will increase your journey time by half an hour – meaning that you could have a 30-minute lie-in, if only you were aware of all this information. Worry not! Because your devices are all connected by the IoT, these messages are communicated to your alarm clock, your coffee maker, your hot water and your bath, automatically giving you that lie-in, saving you money on your gas bill and making sure both your bath and your morning coffee are still deliciously hot when you eventually spring out of bed. Sounds perfect, right?

But wait! What would your morning look like if something went awry with one or more of these communications?  Your alarm goes off 2 hours late, you wake up to no hot water, a stone cold cup of coffee, and you’ve lost an important business deal because you missed your meeting. Also let’s face it, if the smart traffic lights are broken when you are eventually en route to work in your self-driving car built by DARPA or Google, you could be dealing with something a lot more serious than missing your caffeine fix and having to wear flippers in the bathroom! OK so that’s probably a bit over dramatic, but the point is that despite some extremely helpful use cases, a lot more can go wrong.

The Internet of Threats?

The benefits of the IoT are clear – Communication, Control through automation and Cost Savings (the 3 c’s), and the same IDC report we touched on earlier in this article suggests that the greatest IoT opportunities will be initially in the consumer, discrete manufacturing, and government sectors. But just what are the possible threats?

  • The IoT is based on an Open Source foundation and, as we saw in our earlier blog post Mutiny & the Bug Bounty, the security of OpenSSL was brought to its knees by the recent Heartbleed bug.
  • There have already been several examples of smart products being hacked. Black hat hackers have, for example, already turned Google’s Nest thermostat into a network traffic sniffer spy, and the Belkin Wemo Home Automation firmware was originally found to have 5 separate security vulnerabilities, once it was already installed in consumers’ homes.
  • Another really interesting example occurred at retail store Target when attackers stole customers’ credit card details by accessing the retailer’s point of sale devices via vulnerabilities in the store’s smart air conditioning system!

There are a number of other threat factors peculiar to devices connected by the IoT that need to be considered at the design and build stage. One is that they are not typically end-user serviceable; another is that they may behave differently where bandwidth is restricted or latency is high; a third is that the relationship between machine and machine, or machine and end user, may be transient – for example a mobile phone that is temporarily linked to a hire car. There is also a lack of contact between the engineers who create the physical smart “things” and those who install them. All of this, combined with the ability to spy on and steal from devices connected by the IoT, raises serious issues of public and personal privacy and security, legality and ethics.

The Internet of Tests

Each component part of the IoT (network, application, mobile and internet) has its own security and privacy issues so it is unsurprising that, when all of these things are combined, the potential problems are vastly multiplied. So what is the best way to pre-empt these potential issues and diminish them as much as possible?

There are various ideas that could help, such as separating out the networks so that the IoT devices can’t interact with things that are on a protected network and building devices that are designed to die after a given time span. Ultimately though, the most effective way to ensure that your IoT devices do not fall foul of all of these security, privacy, legal and ethical threats, is to create an IoT-specific testing road map and vigorously test at every stage of development, including replicating the installation environment. It is clear that this new technology requires a new testing strategy.

So what are the challenges of Testing the IoT? Well, as smart devices become more prolific and widely used, the end user environment could be hot, freezing, wet, humid, at altitude, in motion or very noisy – all of which impact the effectiveness of the device. A smart phone alone can now have 20 million lines of code, so these ultra-connected devices are of course extremely complex with more room for error. Each device also uses a wider range of resources such as memory, processing power, bandwidth and battery life; all of which need to be tested. Furthermore, traditional hard and soft testing tools will need to be upgraded to cope with these greater levels of complexity. High levels of competition also mean that there are pressures regarding time to market and costs.

A successful IoT test strategy will need to include, at least:

  • A skilled test team with experience in web and embedded environments, hardware, systems and network knowledge, and performance testing expertise
  • Practical threat analysis
  • Attack Trees and Threat Models built early on, to inform decision making
  • A combined automated and manual testing strategy
  • Production hardware schematic review and verification
  • Testing manufacturer’s vulnerabilities
  • Base Platform, Network Traffic & Interface Security Analysis
  • Verification of functional security design and architecture requirements
  • Functionality assessment
  • Security focused code reviews
  • Backdoor identification
  • Testing and proper calibration of the device sensors (usually with a defined stimulus signal)
  • Business integration testing
  • Rapid agile testing and reporting
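A couple of the items above – threat analysis and attack trees – lend themselves to lightweight tooling. As a sketch (the tree and its leaf values are entirely invented), an attack tree can be modelled as nested AND/OR nodes and evaluated to see whether any attack path is still open after a given set of mitigations:

```python
# Each node is (kind, children) where kind is "AND" or "OR"; a leaf is a
# bool naming whether that attack step is currently feasible.

def feasible(node):
    """Evaluate whether the attack at this node is currently achievable."""
    if isinstance(node, bool):
        return node
    kind, children = node
    results = [feasible(c) for c in children]
    return all(results) if kind == "AND" else any(results)

# Hypothetical tree: steal data via (weak firmware AND open debug port)
# OR via default credentials.
weak_firmware, open_debug_port, default_creds = True, False, False
tree = ("OR", [("AND", [weak_firmware, open_debug_port]), default_creds])

assert feasible(tree) is False  # closing the debug port blocks the only path
```

Built early, such a model informs decision making: it shows which single mitigation (here the debug port) severs an entire branch, and which leaves still need a test case against production hardware.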

Internet of Rights?

Now the Internet of Things is not a new phenomenon; the majority of us use it every day on our laptops and mobile devices, and have done for a few years. It is, however, set to grow exponentially in the next 24-36 months, which poses the question – how will we maintain our privacy and security?

We know the potential social and business benefits are huge – who doesn’t want to improve airport flow or use cheaper wireless technology in the cloud at work? And yes, as with all technology and indeed most things in life, there are risks and threats; but with the right approach to testing much of this can be eliminated so that the benefits far outweigh the potential issues.

So, IoT: good or bad, or simply here to stay regardless? Well, before making up your mind you may want to consider a few words of warning by Gartner Fellow, Steve Prentice, in a recent interview with ComputerWeekly, “Smart machines are close to outsmarting humans – whether driving a car or determining a medical diagnosis – leaving the human overseer with the responsibility but reduced capability. But, if we take the major step of changing our legal systems to give machines the responsibility for their own actions, can they also expect rights?”


AUTHOR: Sogeti UK Marketing team

Posted in: Cloud, communication, Innovation, Internet of Things, IT strategy, mobile testing, Technology Outlook      


Testers are detectives. Often I spend a lot of time investigating how a certain software programme that needs adaptation can be tested. Some issues are:

- The system is not well documented and people don’t seem to be sure how the system is built up or where exactly the changes are needed.

- The system is a giant black box of which the output does not always tell us much about whether the internal behaviour is as required.

- The software interacts with other systems, but it is a mystery whether that interaction can happen in the same way in the test environment or what is needed for it to work, theoretically.

In those cases, test execution often turns out to be hard, error-prone, and slow – even for small programme changes.

Whatever project approach you take, the testability of the software being created or amended is a decisive factor in the time and cost of testing. When the required effort is deemed too great and management decides to drop (part of) the testing, the result is risk: a potential loss of quality and a lack of confidence in the product.

However, testing is not a one-time activity as software is a living thing that constantly needs to catch up with its users’ demands. Therefore testing is something that needs to be done throughout the life span of the software whenever changes are made to it – and I promise you there will be changes.

Investing in the testability of a system gives return over its entire life span and TMap® recognises this value; it lists testability as a quality characteristic of a software system, alongside all-time classics like functionality, security, and performance.

There are a number of measures that can be taken to improve the testability of a system:

Improve transparency about what the system does and what it is supposed to do:

  • Keep updated system documentation
  • Write readable, well-structured code
  • Have a solid test basis (clear software requirements)

Use technologies and write code by which parts of the system can be tested in isolation:

  • Maintain a proper separation of concerns
  • Allow interim results of the system to be made visible, assessed, and even manipulated

Set up a full-fledged test environment:

  • Find the right balance between representativeness and flexibility (e.g. adjustable system date for purposes of time travel)
  • Opt for technologies in which tests can easily be automated
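The ‘adjustable system date’ measure above can be as simple as making the clock injectable, which is also a small exercise in separation of concerns. A minimal sketch, with an invented business rule:

```python
from datetime import date

def invoice_is_overdue(due_date, today=None):
    """Business rule kept free of hidden dependencies: the 'system date'
    is injectable, so tests can time-travel instead of waiting."""
    today = today or date.today()  # production falls back to the real clock
    return today > due_date

# Tests pin the clock down and exercise both sides of the boundary.
assert invoice_is_overdue(date(2015, 1, 31), today=date(2015, 2, 1)) is True
assert invoice_is_overdue(date(2015, 1, 31), today=date(2015, 1, 30)) is False
```

The same pattern – pass the dependency in rather than reaching out for it – is what allows interim results to be made visible, assessed, and even manipulated in isolation.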

These measures may not be the first thing on any project manager’s mind, but they make the work of both developers and testers a lot easier and more effective, especially in the longer term – enabling better quality to be built in while generally reducing the software’s total cost of ownership.

To read the original post and add comments, please visit the SogetiLabs blog: TESTABILITY MATTERS

Related Posts:

  1. Is it a good idea solving test environment problems “later”?
  2. Playing the blame game
  3. Create a safety net, not a hammock!
  4. Use Scrum and halve the test department!



AUTHOR: Wannes Lauwers
Wannes Lauwers has been working with Sogeti since 2012 as a versatile software testing professional who manages to bridge the gap between business and IT in a variety of industries thanks to his broad interests and educational background in business administration. With his process mindset, Wannes is committed to deliver long term solutions well aligned with business needs, thus guiding companies’ convergence of business and IT into true business technology.

Posted in: A testers viewpoint, Developers, Managed Testing Services, Opinion, Requirements, Software testing, Test Environment Management, TMap, Usability Testing, Working in software testing      


During our childhood days we all dreamt about how life would be in the future, inspired by wonderful movies and animated cartoons such as The Jetsons, Terminator, and the like. How many of us imagined having robots as our best friends? Or becoming heroes of the Earth? Or defending the human race from humanoid attacks? But did we ever imagine that we would actually witness some of those fictional characters in reality?

Meet Jibo, the first family robot.

With more than $2m raised through crowdfunding, this is certainly not far from being a reality; Jibo is a family robot designed to perform a few essential tasks specifically aimed at busy families.

According to its creator, Cynthia Breazeal, Jibo is an alternative to smartphones and tablets with assistants like Siri. It orients toward people, recognises voices, and provides personalised interaction. Its features include telling children stories and taking messages for specific family members.


Another type of robot is being made for medical and research purposes. Romeo is a humanoid created to help with assistance for the elderly and those who have lost autonomy.  It can open doors, climb stairs, and even grab objects on a table in order to be of help.

Yet another type is the industrial robot. Tesla, a pioneer in innovative manufacturing techniques, uses robots of various sizes in its factory to put together a car in record time. With its combination of humans and robots, the Tesla factory is a model to be studied and understood.

Looking at the ever-increasing number of robots like Jibo and Romeo, now is the right time to look into creating standards and protocols to ensure that everything remains under control. With the mélange of Artificial Intelligence (AI), robotics and EQ, the new-age robots have slowly moved into our homes and workplaces, performing duties that we used to do on our own. Seminars and TED Talks show how the world can progress with these new robots and humanoids. Many of us would love to have a Wall-e or Rosie kind of robot at home. But would you be able to handle a robot that has gone out of control, or a humanoid gone bad?

The International Organisation for Standardisation has brought in a few standards for different categories of robots: industrial robots, social robots, etc. For example, ISO 13482:2014 focuses on the safety requirements for personal care robots in non-medical applications. The hazards associated with robots have been identified and security measures are provided.

Ethical aspects of using robots in our lives are also gaining significant importance. Many questions have already been raised by various groups such as the Danish Council of Ethics, some of which are:

  • Will robots change the way we view human interactions?
  • In the medical field, is the use of robots a step forward or backward?
  • As robots do not actually have emotions, will the use of these in helping patients with dementia be a positive step towards their recovery or is this plain deception?

In a previous SogetiLabs blog post, Daniël Maslyn mentioned the importance of testing and of expanding our robotics domain knowledge to ensure that these robots do not go astray. After all, humans created robots, and we are error-prone.

Though still in its initial and exciting phase, the field of robots and robotics has already acquired a lot of fans on one side and strong critics on the other. We, as technology lovers, must not get too caught up with the merits of robotics alone.  In the mad rush for personal glory, it is also essential to treat this field with enough criticality so that it remains sustainable and a viable solution to our problems.


  1. http://www.myjibo.com/
  2. http://www.iso.org/
  3. http://www.fastcoexist.com/1682635/watch-robots-put-together-a-tesla
  4. http://www.aldebaran.com/en/robotics-company/projects
  5. Image: http://spectrum.ieee.org/automaton/robotics/home-robots/cynthia-breazeal-unveils-jibo-a-social-robot-for-the-home
  6. Image: http://spectrum.ieee.org/automaton/robotics/humanoids/france-developing-advanced-humanoid-robot-romeo

To read the original post and add comments, please visit the SogetiLabs blog: I ROBOT… AM BACK!

Related Posts:

  1. The H+ shift of Google (Part 3/4: Robotics)
  2. Finally, yesterday’s future is here!
  3. 20 jobs of the future


AUTHOR: Jumy Mathew
Jumy Mathew has been part of the Sogeti Group since 2007. In his current role, as a Java Senior developer, he is responsible for the design and implementation of the EU institutions' HR application (SYSPER2). He currently works for the DIGIT unit of EU commission. He is also involved in mentoring and coaching resources involved in his project.

Posted in: Automation Testing, Business Intelligence, Developers, Human Interaction Testing, Innovation, Open Innovation, Opinion, Technology Outlook, Transformation      


In the first part, some strange conclusions appeared to be drawn from the data. Let’s try to explain what happened.


We analysed a sequence of data (people entering a mall, measured every 7 minutes during the day for a month) that we assumed to be periodic. So we “converted” our data from time to frequency to identify the rush periods (and we “converted” it back to check how close we were).

What we forgot in the process was that our sample rate (fs = once every 7 minutes, i.e. 2.38mHz – as it is “slow” we have to use millihertz units) didn’t permit us to “find” anything with a frequency higher than fs/2 (1.19mHz in our case: something happening more than once every 14 minutes). A key aspect of converting information from the time domain to the frequency domain is that you need at least two data points influenced by a frequency in your observation period to be able to measure it.

“But we didn’t do anything wrong!” one could argue as we found 45 minutes (0.37mHz) and 17 minutes (0.99mHz) which are both below fs/2 (1.19mHz). But on the other hand we didn’t “clean” the data before applying our transformation tool. And there was evidently something (the main subway, up to 10 people every 12 minutes, 1.39mHz) hiding in our data. The 45 minutes was real (it’s the train frequency), but the 17 minutes wasn’t at all. It appeared, as a ghost, in the mirror of the Nyquist frequency (fs/2) due to “folding” (like folding a paper on a line at fs/2).
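The folding arithmetic can be checked in a few lines; a sketch using the numbers quoted above, with all frequencies in mHz:

```python
def apparent_frequency(f, fs):
    """Frequency at which a component of true frequency f shows up after
    sampling at fs: aliasing shifts f into [0, fs), folding then mirrors
    anything above the Nyquist frequency fs/2 back below it."""
    f = f % fs
    return fs - f if f > fs / 2 else f

fs = 1000 / 420.0  # one sample every 7 minutes = 2.38 mHz

# The subway component (every ~12 minutes = 1.39 mHz) lies above Nyquist
# (fs/2 = 1.19 mHz) and folds down to a ghost near 0.99 mHz (~17 minutes).
ghost = apparent_frequency(1000 / 720.0, fs)
assert abs(ghost - 0.99) < 0.01

# The genuine 45-minute train component (0.37 mHz) is below Nyquist and
# comes through unchanged.
assert apparent_frequency(1000 / 2700.0, fs) == 1000 / 2700.0
```

So the 17-minute “rush period” was never in the mall at all – it is the 12-minute subway arrival reflected in the fs/2 mirror.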

This folding is the first reason to always be careful when looking for frequencies in measurements of phenomena that can vary at more than half the sampling rate. One way to deal with it is to increase the sampling frequency; here is the same mall example with a one-minute sampling rate (still showing the estimation for comparison):

Fig 1: higher sampling rate

But it’s not that easy when you don’t know how “high” in frequency reality can go, so applying a low-pass filter is usually the way to go – remembering that low-pass filters are not “vertical” and induce artifacts of their own (the stronger the filter, the more the phase is altered; some residue often remains if there is strong high-frequency information, which should be cancelled out beforehand by added noise).

In everyday experience, when you see a wheel rotating backwards in a movie, or when you find a CD from an old mastering harsh-sounding, you will now know where it comes from (aliasing for the wheel; steep low-pass filters, little or no noise added beforehand, and sometimes folding residue for the CD).

Regression to the mean

As for the strange performance improvement (and deterioration) of services when comparing the best/worst performing groups across two successive tests (when in fact we did nothing between the two campaigns to improve or worsen their performance), I’ll let Derek Muller from Veritasium give you some explanation and examples:


The main point here is to always be careful when comparing an oriented subset of measures (because the reason for the selection can often bias the comparison), to always check the population of measures (is it already an oriented subset?) and the effect of the measurement on the data (am I measuring the tool or the phenomenon?). Another good practice is to keep an open mind and evaluate other hypotheses as well (because our main line of analysis often converges with the result we would like to observe).
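A small simulation (all numbers invented) makes the selection effect visible: give every service the same true performance, pick the “worst” group from one noisy measurement campaign, and watch the group “improve” on remeasurement with no intervention at all.

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

def measure(true_score):
    """One noisy measurement of a service's 'true' performance."""
    return true_score + random.gauss(0, 10)

# 200 services with identical true performance; any difference we
# measure is pure noise.
services = [50.0] * 200
first = [measure(s) for s in services]

# Select the 'worst' 20 performers from the first campaign...
worst = sorted(range(200), key=lambda i: first[i])[:20]
# ...and remeasure them without changing anything.
second = [measure(services[i]) for i in worst]

mean_first = sum(first[i] for i in worst) / 20
mean_second = sum(second) / 20

# The 'worst' group regresses toward the true mean of 50 by chance alone.
assert mean_second > mean_first
```

The same mechanism makes the “best” group deteriorate on retest – which is why the selection criterion must always be questioned before crediting any intervention.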

Or you can use this in future career moves…


With data becoming easier to acquire, tools easier to use, and quantities easy to be overwhelmed by: use the tools you know well, keep an open mind about your hypotheses, and always ask a specialist when dealing with complex analyses and high-stakes results.

To read the original post and add comments, please visit the SogetiLabs blog: THIS PIECE OF DATA IS LYING! (2/2)

Related Posts:

  1. This piece of data is lying! (1/2)
  2. Three Data Scientists Share Six Insights on Big Data Analytics
  3. Secure Your Piece of the PIE: of All the Trillions that Pervasive Interaction Engine IoT Will Bring
  4. Big data and trends analysis: be aware of the human factor!



AUTHOR: Claude Bamberger
Claude Bamberger has been an information systems architect since his first job in 1994. Claude’s foray into innovation has been recognized twice by ANVAR (now OSEO), first for a school project that grew into a lab and then for a collaborative Talent Management solution, as well as by the EEC for a custom TV channel for high-speed Internet providers that has been deployed in multiple countries from Asia to Europe.

Posted in: A testers viewpoint, Big data, Business Intelligence, communication, Developers, Opinion      




Data analysis is fascinating as arguably with a good source of data and appropriate tools (which are both becoming more and more accessible these days), you can see more clearly, explain what is happening and even predict the future.

In this first part, we will walk through two cases where data appears to be lying.

First case: The mall and the subway

In a mall we see people coming and going all day long, but couldn’t we predict the crowd a little better to adjust our sales animations?

Let’s measure people entering the mall every 7 minutes:


Fig 1: number of people entering the mall during a 6-hour extract, one blue point every 7 minutes

Based on this data (in fact, based on one month of such data), and using the “Power Spectral Density Estimator” tool in the new version of our data analysis system, we were able to identify the frequencies at which larger groups of people come into the mall!

We found two main periods: 45 minutes and nearly 17 minutes, which, used in a simulation, correlate quite well with the measurement.


Fig 2: same as figure 1, with the red curve showing the estimation

In conclusion, the 45-minute period is logical, as the main train frequency during the day is one every 45 minutes; but figuring out why there is a 17-minute frequency is difficult, as none of the main subway schedules indicate this kind of timing.
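The post doesn’t show the estimator’s code; as a rough sketch of the same idea, `scipy.signal.periodogram` can surface such periodicities from synthetic 7-minute counts (the amplitudes and noise level below are invented; only the 45- and ~17.5-minute periods echo the story):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)

# Synthetic stand-in for one month of 7-minute counts: a 45-minute and
# a 17.5-minute component plus noise (amplitudes are invented).
t = 7.0 * np.arange(30 * 24 * 60 // 7)           # minutes, ~one month
counts = (40
          + 15 * np.sin(2 * np.pi * t / 45)
          + 8 * np.sin(2 * np.pi * t / 17.5)
          + rng.normal(0, 3, t.size))

# Power spectral density estimate; fs is in samples per minute.
freqs, psd = signal.periodogram(counts, fs=1 / 7.0)

# The two strongest peaks (ignoring the DC term) recover the periods.
peaks = freqs[1:][np.argsort(psd[1:])[-2:]]
print(sorted((1 / peaks).round(1).tolist()))     # ≈ [17.5, 45.0] minutes
```

A periodogram happily reports whatever peaks the sampled data contains, folded or not, which is why the 17-minute mystery needs the explanation from the second part.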

Second case: The test

To improve our software performance, a complete “live” measurement campaign was conducted on our services layer: the most comprehensive test to date, with the response times of one thousand services measured in real conditions.

Fig 3: service performances, first measurement campaign

The network team feels it could improve the results by prioritising the best-performing services (which are also the most often used, as the code of these services is already quite optimised) using QoS on the network.

In my team, we believe code reviews are the way forward. We think we can improve the results by reviewing the services and giving advice to the development team, so we take the 100 “worst”-performing services on the list and begin our work.

A month later, a new campaign is performed; globally, we observe the same kind of measurement, with a comparable mean and standard deviation.

And the results are…


Fig 4: impact of code reviews on 100 services

Very good indeed! As you can see, the improvement (data moving “left”) is 10-50% in each rank, and some services have improved by at least 40%.

Well, not so good results for some friends of ours…


Fig 5: impact of QoS on the 100 top services

The “best performing” services are now even worse (data moving right), with some nearly doubling their response time; most ranks are worse (the first two are a little better, but it’s not worth it).

Based on these results, we can conclude that the network was already very well set up and should be put back to its previous settings, and we can schedule code reviews for the next 100 “worst performing” services to evaluate the ROI more closely before generalising this approach to every service used in critical applications.

But talking with our colleagues, we realised two very strange things:

  • the network team, in the end, didn’t set up the priorities, as the network monitoring tools showed a very fluid network
  • the development team was swamped with a new mobile app to build and integrate, and couldn’t act on our recommendations yet.

So no one did anything, but nevertheless the results changed dramatically. I can’t figure out why our results were so positive. Chance, perhaps? Can you figure it out?


Those two stories were simplified illustrations of things that can go wrong (or too well) when using data.

In the next part, we will take a closer look at the tools we used and how all this can be explained.

To read the original post and add comments, please visit the SogetiLabs blog: THIS PIECE OF DATA IS LYING! (1/2)

Related Posts:

  1. Secure Your Piece of the PIE: of All the Trillions that Pervasive Interaction Engine IoT Will Bring
  2. Three Data Scientists Share Six Insights on Big Data Analytics
  3. Make big value out of your big data… and small Data
  4. Open data: Data for free or free datas? What’s the impact?




AUTHOR: Claude Bamberger

Posted in: Big data, Business Intelligence, Developers, IT strategy, Performance testing, Test Automation, test data management      


By Carballo (http://blog.engeneral.net/) [CC-BY-SA-2.5 (http://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons

Now more than ever before, we are finally getting all the great new gadgets and tools that we were promised by visionaries in past years. Always ready to share my profound insights with my peers, I decided to define some areas I’m personally interested in, find the future vision that was popular decades ago, and see where we stand.

In no particular order, here are some examples of innovations that were predicted long ago that are now finally getting to a stage where we can actually start using them.


Let’s start with an old party favorite, the robot. Wouldn’t it be great if we could create machines that look like us and can do all the chores and tasks we cannot or do not want to do? This basic idea has been around for a long time and it has been one of those things everyone assumes will happen in the future, but ‘not in my lifetime’… until now!

Today you can meet Kodomoroid® and Otonaroid®, two very lifelike androids on display in the Miraikan museum in Japan. You can buy household robots anywhere for individual chores like mowing the lawn or vacuuming the floor, and humanoid robots like ASIMO have reached a stage of development where it’s easy to see them becoming commercially available.

Things get smaller!

OK, so we cannot shrink people or submarines to the size of a micrometer yet.

But there have been great achievements in nanotechnology (medicine delivery, DNA-Origami). This advancement means that by using nanotechnology, medicine can be delivered exactly where needed without interfering with the surroundings. Nanotechnology is also being used to help create stronger building materials and energy-efficient engines.


Any credible future vision used to include flying, often autonomous, vehicles taking us anywhere we want. Plus people in shiny, skintight clothes, but that is perhaps more a matter of the questionable fashion sense of the time.

Not flying, but definitely autonomous, is the Google Self-Driving Car, which is basically ready for commercial release; we can now finally sit back and relax while the car does all the heavy lifting.

Communication and knowledge sharing

Internet. Just, internet.

In 1895, Paul Otlet started the “Universal Bibliographic Repertory”, an index-card-based collection of knowledge items people could reference. His goal was to share information across the globe, sending information on request and setting up proxies in cities everywhere. He was still thinking in terms of horse-powered delivery systems and manual filing, but the basic idea was there: to create a comprehensive overview of all human knowledge and share it freely worldwide. Now the internet has made Otlet’s vision come true, as we can communicate freely (well, most of us can, anyway) and share information such as our cat videos with anyone. We have more information at our fingertips than we can ever hope to read.

Plus: Communicators

From Google Glass and Apple Watch, to mind-machine interfaces and finding god-particles, we are now living in an age of innovation, using technology that only yesterday was science fiction. And business technology is at the forefront of this development. Great time to be working in IT!

To read the original post and add comments, please visit the SogetiLabs blog:


Related Posts:

  1. 20 jobs of the future
  2. The Future of Us: The Dawn of Nanotechnology
  3. Signs of future, where’s the new Nils-Olof?



AUTHOR: Gerard Duijts
Gerard Duijts has been working as a business consultant at Sogeti since 2007, consistently working on innovating and improving the way Sogeti helps clients. Gerard is SharePoint lead for Sogeti NL, and is the driving force behind SSI - Sogeti Social Intranet, an internationally recognized business solution that makes it possible to offer a social intranet solution to our clients in days rather than months, incorporating many years of experiences and best practices.

Posted in: Business Intelligence, communication, Developers, Innovation, Internet of Things, Open Innovation, Opinion      


Critical Customers to Critical Testing

For the 2014-15 World Quality Report, Sogeti has collaborated once again with Capgemini and Hewlett-Packard, interviewing 1,543 CIOs, IT Directors and testing leaders in 25 countries, to bring you the industry benchmarks and key insights that will give you the inside track when planning your application quality and testing strategies.

The pivotal takeaways from this year’s Report are that quality assurance (QA) and testing budgets are rising globally, with an ever-increasing focus on quality; and that cost optimisation, as ever a top priority, is being delivered through process and technology advancements such as cloud-based solutions, as well as through organic growth. The emphasis on quality and reduced time to market, paired with the widespread adoption of Social, Mobile, Analytics & Cloud (SMAC) technologies and the additional pressures caused by the ever-burgeoning Internet of Things (IoT), means that a dynamic, Agile testing and QA strategy is increasingly viewed as a business-critical function, with a huge 94% of organisations currently employing Agile methods. The sheer pace of these advancements has caused concern: 40% of IT leaders admitted that they struggle to find the time to adequately test their mobile solutions.

In the UK, these combined factors have led to the share of the testing budget allocated to staffing and human resources rising from 23% in 2013 to 36% in 2014, as businesses seek more highly skilled testers with knowledge of vertical markets and business processes. This is set to remain the highest area of investment in testing and QA in 2015 in the UK.

20 Years to Build and 5 Minutes to Ruin

Reputation is everything, and as Warren Buffett famously said, “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.” Sure enough, as social media and review sites have made real-time customer engagement, feedback and peer recommendations the norm, organisations worldwide are realising the importance of a solid Online Reputation Management Strategy (ORMS). This incessantly rising customer demand for great performance and reliability, with a seamless multi-channel experience, has led to an increase in QA activity: the percentage of the total IT budget spent on QA rose from 18% in 2012 to 26% in 2014, with an expected increase to 29% by 2017.

Less Cost in the Cloud

Cloud adoption for hosting and testing applications declined in 2013 but has risen again this year, with 28% of applications being hosted in the cloud as IT leaders seek the most cost-effective solutions. There has been a corresponding rise in testing in the cloud, from 24% in 2013 to 32% this year, with a predicted rise to 49% globally by 2017. In the UK, where organisations currently still show a preference for private cloud infrastructure over hosting their production systems on the public cloud, cloud testing is expected to rise to an impressive 53% by 2017.

Collaboration & Greater Expectations

One of the most interesting trends indicated by the Report is that globally, and particularly in the UK, businesses are continually expecting more value from their outsourced service provider. The focus has shifted from supplying testing resources for projects, to delivering the value-add of knowledge, expertise, tools, methods, metrics and processes to help transform the customer’s testing organisation. Organisations that used to rely on internal-only teams or temporary contractor resources are realising that their current models don’t offer optimal effectiveness and flexibility. One in five (20%) testing projects is co-managed together with a professional services provider – an increase from 16% a year ago. And less than a third (32%) of testing projects are now completed using in-house resources only. This highlights a distinct move to find new ways that can positively transform the client’s entire testing organisation.

As the largest global independent QA and Testing survey, the World Quality Report is a valuable tool to help you determine your testing strategy and justify your budgets for the next 12 months. Request your copy of the full report here and join the webinar on 16th October here.



AUTHOR: Sogeti UK Marketing team

Posted in: Behaviour Driven Development, Business Intelligence, Human Resources, Reports, Research, Transformation, World Quality Report      