It’s been a while since my last blog post, and I have missed it. Sharing my own and the team’s experiences with you all is always rewarding, so I will continue my postcard series from a while back. This post is about working with a very energetic team of people, which makes me just buzz with energy.

Sometimes you get lucky in life and get to meet inspirational people… either on YouTube, maybe even face to face… but sometimes you actually get to work with them. Some bring ideas, others lateral thinking and yet others just positive energy… but none of that matters individually until they come together. That is when the chemistry really starts to work.

This week I had this pleasure in three different ways. First, meeting colleagues in an attempt to create a completely new “environment”. Second, meeting international colleagues, which brought minds together. And third, meeting long-time friends whom I had not seen for many, many years… and where time and distance are no barriers. In all these meetings we were able to overcome issues in one way or another: bringing life back to a project that had grown stale, sharing incredible achievements with each other not just as a point of reference but actually doing it, and transcending time (having not seen each other for 15 years).

This collective energy powers my windmill (sorry, I had to make the reference to the many modern windmills I saw as I flew into Amsterdam, the windmill being the unofficial Dutch symbol). Meeting and working with people with such positive energy really helps tremendously in achieving your own goals. It allows the free flow of ideas, where criticism is not seen as something negative but simply as a challenge to overcome. This is where new and exciting things are thought up.

Working with people from different backgrounds and experiences especially helps in overcoming issues faced in one country that might not occur in others. Sometimes you really can’t do these things on your own, so I am currently working with partners to set up some exciting new ways of working for ourselves and for our customers.

After two days in the Netherlands I am full of great new ideas that can be applied to our Sogeti Studio. Things we cannot talk about yet but all will become clear soon!

AUTHOR: Marco Venzelaar
Marco is an HPE Tooling Specialist and managing consultant for Sogeti.

Posted in: communication, Digital strategy, Human Resources, Innovation, IT strategy, project management, Quality Assurance, Research, Software Development, Test environment, Transformation, User Experience, User Interface      


When you execute a set of load tests, you don’t just test the desired number of users on your system; you try to go beyond that, because you want to find out what will happen when more people start using it, and you need to know the system will still cope — thus future-proofing your system.

Of course this is not just future-proofing; it is also testing for the eventuality that an unexpected load occurs, perhaps as a result of a new marketing campaign or a market-changing event which may push peak usage above expectations. That is why we test not just at 100% load but also at 125% and even 150%. So when we are expecting 200 users on the system, we will perform tests for up to 300 users.
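The arithmetic above can be sketched in a few lines. The 200-user figure and the 100%/125%/150% levels come from the example in the text; the function name and structure are of course just illustrative.

```python
# Sketch: deriving load-test targets from an expected peak, using the
# 100% / 125% / 150% levels described above.
EXPECTED_PEAK_USERS = 200  # the example figure from the text

def load_targets(expected_users, levels=(1.00, 1.25, 1.50)):
    """Return the virtual-user count to test at for each load level."""
    return {f"{int(level * 100)}%": round(expected_users * level)
            for level in levels}

print(load_targets(EXPECTED_PEAK_USERS))
# {'100%': 200, '125%': 250, '150%': 300}
```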

Imagine, though, that you are not testing the number of users on a particular system but the amount of flexibility in an object, such as a massive A350 at Airbus in Toulouse. That is exactly what this particular test team has been doing. A wing naturally flexes up and down during flight, and they are testing the tolerance to 150%. This means moving the wings 5 meters up and down from their normal position!

It is great to see testing in its physical form. In most of my load testing cases, when we break the system there is no puff of smoke or a sound from the system (sometimes not even a peep from the project manager to say we broke his system!). These testers actually see their test subject groan under the stress and, hopefully, survive. I am not sure whether Airbus will eventually try a destructive stress test to identify the limit; that is easier to do in a software system than in a physical model like this (hence the word “destructive”). Anyway, Airbus still has lots of testing to complete on its A350 before it becomes fully operational.

Another difference from the performance testing I normally do is duration: most of my performance test runs last anywhere between a day and a few weeks, while Airbus is talking about months — probably years, if you also include the simulated load testing done prior to construction and the maiden flight!

This major milestone in their test program was achieved last Friday (14-Jun-2013) when the A350 spread its wings for the first time during its maiden flight. There are some interesting snippets you can see in the videos of their test program. So if you are as fascinated as I am, you can follow their progress on their dedicated website (and even through a fantastic iPad magazine full of multimedia).


Posted in: High Tech, Performance testing, Testing and innovation      


This is the everlasting question for any business when it decides to start using test tools (both functional and non-functional). I have also seen many situations where tool licenses were up for renewal and, upon seeing the cost to renew, the company wanted to do just one of two things: reduce the cost or eliminate it. This is an understandable reaction, especially in the current economic climate.

Such a decision, however, should really be made in a wider context, and cannot simply be an uneducated black-and-white choice between Open and Closed source testing tools.

Perhaps one of the more widespread beliefs in favour of Open source test tools is that they are free and will therefore save the company money. Whilst both of these statements are correct, neither provides the complete picture, nor should either be the basis for the decision. Open source is indeed free and can save the company money, but that is only a saving on the initial license cost.

There are a number of other cost factors that need to be taken into account in this decision and, unfortunately, some of the cost factors associated with Open source tools are more difficult to determine than with the more traditional Closed source tools. Training costs, resourcing availability and cost, support accessibility and organisational adoptability are all very important. Some of these are not of immediate interest to individual projects, but for companies with multiple projects they become more and more important. Putting a cost against them can be very difficult.
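As a rough illustration of why the initial license fee is only one term in the sum, a back-of-the-envelope comparison might look like the following. Every figure and cost category here is an invented assumption for the sketch, not data from this post.

```python
# Illustrative total-cost-of-ownership sketch: the license fee is a one-off,
# while training, resourcing and support recur every year.
def total_cost(license_fee, training, resourcing, support, years=3):
    """Sum a one-off license fee plus annual cost factors over an assumed period."""
    return license_fee + years * (training + resourcing + support)

# Hypothetical figures only: a "free" tool with higher ongoing costs vs a
# licensed tool with vendor support included.
open_source   = total_cost(license_fee=0,     training=8000, resourcing=60000, support=5000)
closed_source = total_cost(license_fee=30000, training=3000, resourcing=50000, support=0)
print(open_source, closed_source)
# 219000 189000
```

The point of the sketch is not the (made-up) numbers but the shape of the calculation: the recurring terms dominate the one-off license line.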

Let’s use an example. A client was using an Open source tool for performance testing; it worked fine, but a tweak had to be made to the code to enhance its usability. This was easily done, and the tool was then a great asset for many projects. However, a little way down the line some strange behaviour was noticed in response times, and it turned out the core engine of the open source tool was sending out double requests. In all previous uses the second request had returned in milliseconds and was never picked up (it was not expected), but in this more recent project the second response took much longer than the first (and only the first response corresponded to the user experience). The problem was located in the core engine of the tool (not touched by the enhancements made when it was first introduced), but unfortunately the core engine was not as open as had been thought. There was also no response from the tool community.
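A defect like this double request can often be spotted in a request log before it skews any measurements. Here is a minimal sketch of that idea; the log format and the helper function are hypothetical, not the client’s actual tool.

```python
# Sketch: count identical requests after stripping the timestamp, so an
# unexpected duplicate fired by the load tool shows up as a repeat.
from collections import Counter

def find_duplicates(log_lines):
    """Return requests that appear more than once, with their counts."""
    requests = [line.split(" ", 1)[1] for line in log_lines]  # drop timestamp
    return {req: n for req, n in Counter(requests).items() if n > 1}

log = [
    "12:00:01 GET /checkout",
    "12:00:01 GET /checkout",   # unexpected duplicate sent by the tool
    "12:00:02 GET /basket",
]
print(find_duplicates(log))
# {'GET /checkout': 2}
```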

Now this might have been because the company made a poor original tool choice, but at the time they were not aware they were making a bad decision, and this was a factor that was not included in any of the cost projections.

I am not professing that we need to go for “Closed source” (or, in other words, licensed) tools; absolutely not. There are some fantastic Open source tools available on the internet these days, but it is important to really look at all the aspects involved in tool use, way beyond just the initial license cost.

Ask around and see what the experiences are of the various tools on your radar. An independent assessment of the use of test tools in your organisation can also greatly aid a tool choice/implementation. If in doubt give us a call! Here at Sogeti we are more than happy to help you with your choice.


Posted in: A testers viewpoint, Closed Sourced, functional testing, non-functional testing, Open Sourced, Opinion, Software testing      


There was only one thing that came to mind this morning when I read this article. I have seen this before in IT projects, especially in the performance testing discipline!

I have talked about the F-35 before on this blog, and this time I’d like to approach it from a different angle. The news article linked to above suggests that the requirements are being ‘dumbed down’ to meet the performance of the aircraft. Besides the typical headline-grabbing use of the words, it is not something new; at least, not to IT projects. I am sure we’ve all been somewhere where the functional requirements were amended (sometimes temporarily, until the next release) so that it could be claimed that the project delivered to spec.

I have also experienced this with performance testing, where the non-functional requirements stated a 10-second response time. This was something “that came straight from the business”, so it must have been true. Unfortunately, business requirements need managing too; they often need to be pulled into context and then into reality. If you browse the BBC News website you expect a page to appear within seconds, but you don’t expect a sales report across multiple regions with multiple products for the last 12 months to appear within the same time… (unless it is cached, of course!).
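Requirements like that 10-second one are easier to manage once they are checked against measured data rather than debated in the abstract. Here is a minimal sketch; the 10-second threshold comes from the anecdote above, while the 90th-percentile criterion and the sample data are illustrative assumptions.

```python
# Sketch: check whether a given percentile of measured response times
# sits under the agreed threshold.
def meets_requirement(samples_ms, threshold_ms=10_000, percentile=0.90):
    """True if the chosen percentile of response times is within the threshold."""
    ordered = sorted(samples_ms)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index] <= threshold_ms

samples = [850, 920, 1100, 1300, 2400, 9800, 12050]  # made-up measurements (ms)
print(meets_requirement(samples))
# False: the 90th-percentile sample exceeds 10 seconds
```

Framing the requirement this way ("90% of requests within 10 s" rather than "10 s response time") is itself part of pulling the requirement into context.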

Of course we don’t know the finer details of this move by the Pentagon, but if it is true there will be a massive job in changing and managing end-users’ expectations, leaving aside public opinion. Was it a case of misaligned user expectations? Overselling of its capabilities? Or simply a design flaw? Fortunately, in IT projects we don’t have to worry too often about public opinion, but there are always some users who might be as demanding as those who will be using the F-35.

Don’t get me wrong; I do think moving the goalposts is sometimes justified, but that can only be agreed with an open line of communication between delivery and users. Managing user expectations is as important as a working product.


Posted in: A testers viewpoint, functional testing, non-functional testing, Performance testing, Requirements, Software testing      


The wintry weather in the UK this week has brought with it a lot of comments in the news, and throughout the business community, about being prepared for any eventuality. Do we really need to be prepared to this level? We can take a risk-based approach to this: if you live in the city, you don’t have to prepare for being totally snowed in as much as someone out in the sticks would… Or do you?

The same goes for testing, and for automated testing in particular. Is your regression suite up to date? Are your expected results up to date? Yes, expected results can change! It isn’t a case of building the regression suite (automated or not) and that’s it until the end of time.

Currently I am testing a Business Intelligence data warehouse and that is, of course, purely based around data and changing data.
If you have ever doubted the usefulness of test automation, just think about the following. Prior to each release (into the test environment) we need to run the test suite to ensure the tests still run OK, but we should also check whether the expected results are still up to date. Then we get the deployment, and then we re-run the whole suite again. Any differences in the (massive) reports can then be analysed; errors are likely to be either the result of an updated calculation, or of something going wrong in the database or between the database and the front end.
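The report comparison described above can be sketched as a simple dictionary diff between expected and actual figures. The metric names and numbers here are illustrative assumptions, not our client’s actual reports.

```python
# Sketch: diff an actual report against stored expected results, flagging
# changed values as well as added or missing metrics.
def diff_reports(expected, actual):
    """Return {metric: (expected, actual)} for every metric that differs."""
    keys = expected.keys() | actual.keys()
    return {k: (expected.get(k), actual.get(k))
            for k in keys
            if expected.get(k) != actual.get(k)}

expected = {"total_sales": 1_204_500, "regions": 12, "products": 340}
actual   = {"total_sales": 1_198_200, "regions": 12, "products": 340}
print(diff_reports(expected, actual))
# {'total_sales': (1204500, 1198200)}
```

Each flagged metric then goes to the test team for analysis: updated calculation, or genuine defect?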

It is important to note that we were able to run two test executions in two days without much effort or delay. When we look at test automation we should not just count the number of releases we are planning to perform, but also how many executions we would run per release. In the above example I didn’t even mention the potential reruns after a defect has been fixed.

Being able to run the automated tests prior to a release ensures the test suite is fully prepared. While the test team executes the manual tests (yes, we don’t have 100% automation) we also execute the automated test suite. At the end we know exactly what failed during the automation execution and this can then be further investigated by the test team.

Being prepared means we can focus on the areas that need our attention, and we’re given more time to plan for, and work on, other things.

By the way, our snowman looks magnificent…. we need just a little more snow so that we can make the snow dog too!


Posted in: Big data, Risk, Software testing, Test Automation, Uncategorized      