Autism is a disability that is rarely spoken about in the workplace, and when it is, the conversation usually centres on the Equality Act 2010. The National Autistic Society (NAS) has stated that employers tend to be anxious, ignorant and prejudiced about taking on autistic staff. It has also found that only 15% of adults with autism are in full-time paid work; it shouldn't be like that. Universities such as Cardiff University are trying to increase the number of young autistic students they admit. The university has even devised the 'Discovery Project', in which current university students help pupils aged 14-19 raise their aspirations, overcome their anxieties and build the skills they need for university (BBC). More than 1 in 100 people in the UK have a form of autism, and what many people don't know is that employing staff with autism can be a real positive.

Although people with autism may lack social skills and find it difficult to communicate, companies such as SAP are actively employing people with autism in their information technology departments. The company trains each employee specifically for the role, and the role will generally be in software testing! People with autism tend to be highly focused and notice differences very quickly, which is highly beneficial during exploratory testing. In addition, they tend to retain a lot of information and are keen to do things perfectly. People with autism generally like to follow a schedule and prefer working on one task at a time. An overload of work may distress the individual, so employers need to be clear and concise, and avoid handing over all tasks at once (SAP-TV). SAP states that the results from its employees with autism are outstanding, and it wants to train and hire nearly 700 adults with autism across its offices globally.

As a recent graduate myself, I have experienced studying alongside people with autism at university and consider them my friends. The students in my class with autism were accepted; in some cases it took a while to understand their thought processes, but once we did, it never caused a problem. However, I have witnessed other students at the university mimic these individuals and brand them as 'weird'. I found this extremely uncomfortable and was embarrassed that such a new generation could be so excluding. Research additionally shows that non-autistic employees can be aggressive and rude towards colleagues with autism, and may even bully them.

So what can we do?

We should take the time to understand the thought processes and additional needs of individuals with autism. Hopefully, by doing this, employers will see that employing staff with autism is not a negative, and that everyone should be given the chance to prove themselves. Most importantly, it would make the individual extremely happy to have a place of work.




AUTHOR: Sogeti blog




Image credit: mediashift

Agile development is clearly a growing trend. Its positive impact on time to market and customer satisfaction is beyond doubt. However, involving offshore or nearshore teams is a challenge. Some key success factors (KSFs) of effective agile development are:

  • Frequent interactions between team members
  • High visibility of the tasks to complete (by using Kanban, for example)
  • Simplified process allowing frequent deliveries

The question is: "How can I apply these KSFs with my distributed teams?"

Here are some ideas I have tested with my own teams, or that were shared by scrum masters I have worked with:

  1. Identify the agility awareness of the offshore team: do they know the main principles? Are they willing to change their way of working? If not, you can start with training.
  2. Identify your current communication means: calls, video, e-mails? If your team members mostly communicate by e-mail, then you should try to get people talking together. An introduction of each team member during a video conference could be a good start. An event during which team members meet each other in person would be even better, and a good investment for the future: people tend to collaborate more if they know the recipients of their e-mails and calls.
  3. Identify the tools you can use to increase communication and visibility of the tasks: JIRA + GreenHopper can definitely help (see the sketch after this list). But if you're creative you can also find simple solutions: both teams may keep their own Kanban board and send a daily picture of the current situation.
  4. Optimize your processes to shorten time to market and make sure all teams are aware of the expectations and dependencies on each side.
  5. Ensure the difficulties and blocking points of each team can be identified and shared to find global solutions. Retrospectives are helpful and should be conducted wherever you are located. Any issue slowing the deployment process down should be addressed with high priority.
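As a small illustration of point 3, here is a sketch of the kind of daily visibility script a distributed team could run. It is an assumption-laden example rather than a recipe: the instance URL, project key and credentials are placeholders, and it presumes a JIRA server exposing the standard /rest/api/2/search resource plus the JIRA Agile openSprints() JQL function.

```python
import collections

import requests

JIRA_URL = "https://jira.example.com"  # placeholder instance URL
PROJECT_KEY = "DEMO"                   # hypothetical project key
AUTH = ("user", "secret")              # replace with real credentials


def kanban_snapshot():
    """Fetch the open sprint's issues and group them by status,
    producing a text Kanban board that can be shared with the remote team."""
    response = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={
            "jql": f"project = {PROJECT_KEY} AND sprint in openSprints()",
            "fields": "summary,status",
            "maxResults": 200,
        },
        auth=AUTH,
    )
    response.raise_for_status()
    columns = collections.defaultdict(list)
    for issue in response.json()["issues"]:
        columns[issue["fields"]["status"]["name"]].append(issue["key"])
    for status, keys in sorted(columns.items()):
        print(f"{status:<15} {', '.join(keys)}")


if __name__ == "__main__":
    kanban_snapshot()
```

Posted once a day, even a snapshot this simple gives both sites the same picture of the board without waiting for the next call.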

But don’t forget that each Agile team has to find the solutions best suited to its way of working. So my main advice would be to  address this point during retrospectives and help the team proposing its own solutions. The acceptance and commitment will be much higher.


AUTHOR: Lionel Matthey
Lionel Matthey has been a Senior Consultant with the Sogeti Group since 2012. In this role, he is responsible for delivering testing and project management services to our clients. He is also in charge of research into good practices in agile testing; several knowledge-sharing sessions have already been organized to present the results of his studies and practical examples of the lessons he has learnt in Swiss banks. Lionel is also involved in preparing answers to clients' service requests about testing (RFPs, RFIs, etc.). Previously, he worked for other IT consulting firms as a Test Manager, Business Analyst and Project Manager. In 2007, after a few months in test automation, Lionel moved to Zurich to take over test management on a large project at UBS. He later returned to the French-speaking part of Switzerland to strengthen his skills across many aspects of testing at clients such as Orange, Rolex, Givaudan and Nestlé. In 2010, Lionel accepted the position of Project Manager/Scrum Master on Nestlé's third and largest agile project at that time.



One of the challenges of today's corporate world is the need for effective internal communication and employee engagement. Management and communications teams are concerned that the traditional channels used to distribute news (emails, paper or electronic newsletters) are not paid enough attention: they are left unread, read too late, or lost among other unread mail. The consequences of this lack of readership affect the relationship that the company hopes to have with all of its employees. But all is not lost…


What causes this and how can it be resolved?

Communication is vital across all fields, especially in our global, interconnected society, where decisive business leadership has become the established mode of contact for many seeking opportunities for growth and survival.

Over time, each new generation has used different tools to communicate with friends, family and employers: from fire signals and cave paintings to paper letters, TV, radio and, finally, today's internet and applications. While these tools can bring many rewards to our day-to-day activities, they can also be useless if they are not used correctly. If a company wants to encourage readership of its communications by employees, it needs to acknowledge that it is its own responsibility to provide the right tools that enable those communications to reach staff in an effective and efficient way.

McFarlane (2010) defined efficiency as 'acting with speed; saying things fast, delivering a 20 minute speech in five minutes, using a PowerPoint presentation to present an extended essay to a class', thus ignoring some irrelevant details. A good corporate social media strategy should be able to claim efficiency by minimising the time and effort needed to access important news. Furthermore, the strategy must be effective, providing information that meets employees' interests and that they would actually want to read.

Most traditional communication models are based on an inefficient and ineffective 'paging' model: users are typically unaware of what will be on the next page, and they are forced to scroll or read without a specific intent. In the UK, for example, 'Metro' prints thousands of newspapers and hands them out to people for free every day.

However, in a recent UK observation (see footnote 1), 6 out of 10 passengers on a train were holding a phone and reading material from their device, scrolling through random content but selecting which articles they actually wanted to read, and in what order. This is the crucial difference between social media models and the newspaper model, and it's all about 'interest'.

This is not a challenge that 'digital tactics' alone can solve. A reader is more likely to read a story if it is either relevant (i.e. has a direct impact on his/her life) or interesting (the story does not impact him/her, but he/she is intrigued enough to want to know more anyway), so journalists and communications professionals still need to write a compelling story or post. In addition, while many companies have adopted social media in the workplace through tools such as Yammer, Jive Engage, etc., they have experienced poor levels of engagement with them. That's not because these tools are not great, but because the intended information is hidden within these expansive, wrongly used tools and is hard to find.

People value communication that is efficient over communication that is effective; that is, speed is valued over quality for the sake of instant impact and brevity. That means directing employees along the right path of digital content, instead of expecting them to sift through a large pool of information. Therefore, businesses need a new communication strategy to meet the needs of today's employees: a fast, modern approach capable of tagging, filtering and presenting news based on individuals' interests so that readership can increase.
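To make that tagging-and-filtering idea concrete, here is a minimal sketch, not the implementation of any particular tool, that ranks news items by how well their tags overlap a reader's declared interests. All names and tags are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class NewsItem:
    title: str
    tags: set


@dataclass
class Employee:
    name: str
    interests: set = field(default_factory=set)


def personalised_feed(items, employee):
    """Rank items by tag overlap with the reader's interests,
    dropping anything that matches none of them."""
    scored = [(len(item.tags & employee.interests), item) for item in items]
    return [item for score, item in sorted(scored, key=lambda pair: -pair[0]) if score > 0]


news = [
    NewsItem("Office move update", {"facilities", "london"}),
    NewsItem("New testing practice launched", {"testing", "careers"}),
    NewsItem("Quarterly results", {"finance"}),
]
reader = Employee("Sam", interests={"testing", "london"})
for item in personalised_feed(news, reader):
    print(item.title)  # the two matching stories; the finance item is filtered out
```

The point is not the few lines of Python but the principle: the reader's interests drive the ordering, not the publication schedule.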

The Pew Research Center has reported that, over the last 10 years, usage of social networks has increased from 10% to 74%. The fast growth of technology has caught everyone's attention, regardless of gender, age and education. Another study found that 85% of social network access happens via smartphones and mobile applications. Here at Sogeti UK, we have listened to this research and built upon it to tailor our own solution: a social app designed to meet the requirements of our staff in this fast-growing digital age.


What we’ve done

As of this week, Sogeti UK has designed and launched the second version of its in-house mobile communication application. The application allows the Communications team to voice the organisation's news updates effectively and efficiently. It showcases regular updates from each department of our business, provides information on internal and external events and, within the My Sogeti section, gives staff access to a handbook, a staff directory and answers to frequently asked HR questions.

Version 1 of our app was released in May 2014 and, within the first four months, it had been downloaded by 75% of Sogeti UK's employees. We found that our staff felt more engaged with the organisation because the information they wanted to see was now at their fingertips.

Version 2 soon got the go-ahead, and we distributed it through HockeyApp, a private application distribution platform. The new version introduced new sections, updated the look and feel of the user interface, and gave the Communications team the ability to send short push-notification messages to staff.

We’re really happy with the feedback so far, and are already planning version 3!


Want an app like this?

We would love to design, develop and test an app for you! We can cover the whole application lifecycle and even ensure it’s completely secure, like ours.

Contact us today on: +44 (0)20 7014 8900, or email



Donovan A. McFarlane (2010). Social Communication in a Technology-Driven Society: A Philosophical Exploration of Factor-Impacts and Consequences. American Communication Journal, Volume 12, Winter 2010. Retrieved August 7, 2015.

1. This statement is based on an observation conducted on the London Underground between 8 and 9 am, during which 300 people were observed over 30 days.

Social Networking Fact Sheet, Pew Research Center. Retrieved August 5, 2015.


AUTHOR: Anahita Mahmoudi
Anahita Mahmoudi is a Principal Consultant at Sogeti UK with a drive to design and manage development projects. She is a design innovation consultant with a focus on design for human interaction studies. Anahita also chairs the 'Meet The Future' committee in the UK and promotes cultural transformation to improve colleagues' productivity and engagement.




When did you last have a truly clarifying conversation about IT-related high-order concepts such as Big Data or the Internet of Things? Since June 2011, one website has been dedicated to this specific moving target: the pet project of Mr. Gil Press, Forbes columnist and former Senior Director of Thought Leadership Marketing at EMC Corporation. In October 2015, Dell bought EMC – famous for its Data Lake 2.0 approach – for a staggering 67 billion dollars, which qualifies as the largest deal in tech history.

This conclusively indicates that Big Data means Big Money, and that its importance is only set to grow. One month later, however, Textio and BloombergBusiness noted that Big Data had lost over 35% of its prominence as a buzzword in tech job postings in 2015. Gil Press commented:

‘Two years ago Big Data was everywhere. Companies bragged about using it. New investment funds and top 40 bands were named after it. Engineering job listings containing Big Data were significantly more popular than those that did not. But today, Big Data has become so highly saturated that its use has passed into cliché.’

Textio and BloombergBusiness' top winners were Artificial Intelligence and Real-time Data, which goes to show that Big Data has made it to the next level of Insight and Analytics. The ever-so-famous World Wide Web – dub-dub-dub, or simply the Web – went the same way and is today being overwhelmed by the mysterious Internet of Things; more accurately, of Things and Sensors and Actuators (Vint Cerf), and, at a higher level, of Things and Services (Bosch). That is quite correct, since many other protocols have enriched TCP/IP and overtaken the dominance of simple HTTP. Most non-technical people don't care – which means they don't want to understand this – sometimes to the point that IoT is ridiculed as the Internet of Thingamajigs.

Around 2006, after Pervasive Computing, Ubiquitous Computing and indeed the Internet of Things, the notion of Cyber-Physical Systems (CPS) was introduced to eradicate confusion, repair the damage done and reconcile viewpoints. But terminology is hard to get rid of, and CPS developed into just another buzzword, next to the Industrial Internet, the Industrial Internet of Things, Industry 4.0 and – let's not forget – the amazing Internet of Everything.

Even to experts, it has become quite a confusing landscape. In May 2014, Sathish A.P. Kumar of Coastal Carolina University, a Senior IEEE Member and University Senator, raised this question on the scientific community website ResearchGate: 'I am hearing the terms Internet of Things and Cyber-Physical Systems interchangeably and wondering what exactly the difference between these terms are?'

After a few weeks, Mr. Kumar, who works in the fields of Cyber Security, Cloud Computing, Big Data Analytics and Bio & Health Informatics, was the first to answer his own question, referencing slide #10 of a presentation called 'Internet of Things towards Ubiquitous and Mobile Computing', delivered by Professor Guihai Chen in October 2010 at the Microsoft Research Faculty Summit in Shanghai.

Dr. Chen's explanation goes like this: the Internet addresses the Cyber World, and the Internet of Things the Physical World, aiming at Ubiquitous Connections. Now, as the notion suggests, Cyber-Physical Systems combine the two, aiming at what Dr. Chen calls Harmonious Interactions. Draw these relations and you have slide #10. The distinctions, overlap and directions are very clear, especially when considering what Dr. Chen had pointed out just before.

He said: ‘Smart Planet, Pervasive Computing – all such buzzwords refer to the same balloon. When blown to large size it is called Smart Planet; when to middle size it is called Cyber-Physical Systems; when to small size it is called pervasive or embedded system. And IoT and CPS are actually a pair of twins.’

More than a year later, in August 2015, Mr. Nikos Pronios of Innovate UK brought Industry 4.0 and the Industrial Internet of Things to the table in the ResearchGate discussion thread. Thereafter, Dr. Imre Horvath, a professor at Delft University of Technology, further enlightened matters with a short remark: 'We may consider in our discussion the growing level of synergy between the physical hardware (both analogue and digital), the digital software (control, middleware, application programs), and the cyberware contents (media, data/info, codified knowledge, concept ontologies, learnt agency).'

Eventually, in November 2015, our attention was drawn to a special issue of an Elsevier science journal on the combined topic of 'Cyber-physical Systems (CPS), Internet of Things (IoT) and Big Data'. The special issue of 'Future Generation Computer Systems – The International Journal of eScience' will be published in October 2016, and its webpage contains this extensive clarification:

‘Cyber-physical Systems (CPS) are emerging from the integration of embedded computing devices, smart objects, people and physical environments, which are typically tied by a communication infrastructure. These include systems such as Smart Cities, Smart Grids, Smart Factories, Smart Buildings, Smart Homes and Smart Cars.

The Internet of Things (IoT) refers to a world-wide network of interconnected heterogeneous objects that are uniquely addressable and are based on standard communication protocols. These include sensors, actuators, smart devices, RFID tags, embedded computers, mobile devices, et cetera.

The design of Cyber-physical Systems and the implementation of their applications need to rely on IoT-enabled architectures, protocols and APIs that facilitate collecting, managing and processing large data sets, and support complex processes to manage and control such systems at different scales, from local to global. The large-scale nature of IoT-based CPS can be effectively and efficiently supported and assisted by Cloud Computing infrastructures and platforms, which can provide flexible computational power, resource virtualization and high-capacity storage for data streams and can ensure safety, security and privacy.

The integration of networked devices, people and physical systems is providing such a tantalizing vision of future possibilities that IoT is expected to become a vibrant part of the digital business landscape.’

All in all, this is inspiring guidance for coming to grips with Information Technology and 'Anything Internet' in 2016.

You can also listen to the podcast.


AUTHOR: Jaap Bloem
Jaap Bloem has been in IT since the dawn of the PC and is now a Research Director at VINT, the Sogeti trend lab, delivering Vision, Inspiration, Navigation and Trends.



The Pioneers Festival is one of the world's most popular and exciting events showcasing future technologies. This year in May, in Vienna, it brought together several large corporate enterprises; 1,600 nimble, groundbreaking start-ups; 400 tech-savvy investors; the media; and 3,000 delegates. Arguably the most exciting presentation was Dirk Ahlborn's passionate unveiling of the secrets of Elon Musk's new high-speed transportation system, the Hyperloop. Six months on, we're revisiting this pioneering invention and asking: where are you now?


Hype or Hyperbole?

The brainchild of Elon Musk, the Hyperloop consists of a giant suspended vacuum tube system containing people-carrying capsules, held up by a magnetic levitation system and travelling at close to the speed of sound. That's up to 1,200 km per hour, or 1 km every 3 seconds – about 745 mph for those of us across the pond. Why not simply travel by plane? Well, the purpose of the Hyperloop is to cover relatively short distances, up to 900 miles – say, from L.A. to San Francisco – which are not suitable for supersonic travel, as the whole journey would be spent ascending and descending with no actual flying in between.
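Those figures survive a back-of-the-envelope check; here is a quick sanity calculation (a sketch, not an engineering spec):

```python
kmh = 1200                 # claimed top speed in km/h
km_per_s = kmh / 3600      # ~0.33 km/s, i.e. one km roughly every 3 seconds
mph = kmh * 0.621371       # kilometres-to-miles conversion
print(f"{1 / km_per_s:.1f} s per km, {mph:.0f} mph")  # -> 3.0 s per km, 746 mph
```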

Elon Musk declared that he didn't have time to build the Hyperloop himself, leaving the playing field wide open for companies like Hyperloop Transportation Technologies (HTT) and Hyperloop Technologies (HT) to race to be the first to build a working prototype and prove the concept. Dirk Ahlborn of HTT gave a rousing presentation on the manifold benefits of the Hyperloop, pointing out that it runs on kinetic, solar and wind energy and creates more energy than it uses, making it green and economical and giving it a useful second income stream. This, coupled with HTT's intention to use the internal walls as a giant advertising billboard, is good news for potential passengers: they may even get to travel on the Hyperloop for free.

For those concerned about safety: the Hyperloop is a closed system, managed by computer, leaving little room for human error (the biggest cause of rail disasters) and, furthermore, it is allegedly crash-proof. Another benefit is cost: although not cheap to build, the Hyperloop would cost a mere $6-7 billion, compared with the $65bn for the current high-speed rail plans between L.A. and San Francisco.


Speaking at the Construct Disrupt event in London, Bibop Gresta, COO of HTT, claimed the Hyperloop would be "the closest thing to tele-transportation". "It will change humanity," he said. Of course, not everyone shares his enthusiasm, and in June this year Forbes ran a rather damning article entitled "The Voyage No One Wants to Take: Why The Hyperloop Is A Catastrophe-Waiting-To-Happen." Forbes' main objection was that it would simply be too darn uncomfortable to travel in the Hyperloop, and that the design precludes large turns or changes in elevation; all true, but not insurmountable with the right design and engineering.


Hyperloop Then and Now

At the time of the festival, HTT had just struck a deal with landowners in Quay Valley, California, to build the first full-scale test Hyperloop in 2016. HTT had so far refused any external investment from the 400 accredited investors who had made offers, preferring instead to crowdsource or, as they call it, 'crowdstorm': inviting students and eminent professors, physicists and engineers to offer their time for free in return for shares. A leap of faith perhaps, but one readily taken up by the enthusiastic 10,000-strong Hyperloop community, which had already committed $4,750,000 in man-hours and donations by the time of the Pioneers Festival.

So where are they now? Word in the press is that HTT has now opened the investment doors and secured $26 million in funding, so its investment strategy has moved on in the last six months, and the company has made clear its intention to go public at the end of this year. Although Elon Musk's company SpaceX is not developing a commercial Hyperloop of its own, it has organised a competition to encourage students and engineers to develop prototype pods, and has built a one-mile test track in California to get the ball rolling. In June 2016, competition entrants will bring their Hyperloop pods along to the test track for a futuristic twist on the Gumball Rally! The possible issue with this as a true test is that the track only allows speeds of up to 160 mph, a lot slower than the intended final speed of the Hyperloop. CNET summed it up rather well in a recent article:

“The Hyperloop idea is a cross between science fiction ideas from the future and pneumatic interoffice message-delivery systems from the past. It’s not yet clear whether it will catch on broadly as a replacement to trains, buses, cars and airplanes, but it is clear Hyperloop Technologies has big ambitions for the transit technology.”

What do you think of the Hyperloop? We’d love to hear your point of view so leave us a comment below.


AUTHOR: Sogeti blog




While on an engagement with one of Sogeti's clients, I spotted an opportunity to introduce a service strategy based on TMap in order to provide a Testing Service. Its implementation would follow the ITIL life-cycle stream; this would, in effect, be an on-premises TPaaS. A Testing Service would need to be treated and developed as an ITIL Service, and viewed by the business as an ITIL Service, to ensure best practice.

At the time, I was involved in testing customer-facing applications as part of a data centre relocation. I took Quality Center (QC) 11 down off the shelf, where it had been collecting dust for the previous three years. First, I needed a service strategy. I intended to store all of the test assets in Quality Center.



The client's test artifacts were scattered across the estate on a variety of platforms, none of which had a particular test focus: Jira for test requirements and defects; Auspex for test plans, project documentation and test results; and, for the most part, Excel spreadsheets for test cases. This situation was going to make life more difficult for me, managing five separate projects at the same time. Tracking project and test documentation across such disparate repositories would have put the test schedule at significant risk.

There were often long searches for the test and project assets needed for testing a project, primarily due to the lack of a single repository and inadequate search facilities.

Testing was slipping its schedule, and the issues arising from the situation were feeding a lack of confidence in the test management of the data centre migration.

This situation gave me the opportunity to re-commission Quality Center, using ITIL best practices, in a PoC to demonstrate to the business the advantages the tool could bring.



So, what could Sogeti do to make a difference? Sogeti's Testing Services include Test Management and Automation offerings. The solution I had in mind combined the two, through tooling supplied by an alliance partner, HP. HP ALM would need to be treated as a Service and viewed by the business as a Service offering.


The benefits of structured testing

I saw Sogeti's world-leading structured test management approach, TMap®, as the way to help the client deliver more complex, high-quality software faster, saving my client both time and money. TMap provides a complete toolbox for setting up and executing tests, including detailed and logical instructions for testers.

TMap is a proven method of structured testing, based on Sogeti research and user experience. It provides a complete and consistent, yet flexible approach, which is suitable for a wide variety of organisations and industries. Therefore TMap has been selected as the standard test approach by many leading companies and institutes in Europe and the US.



The TMap structured test approach provided the client with the following advantages:

—     comprehensive insight into the risks associated with software quality

—     a transparent test process that is manageable in terms of time, cost and quality

—     early warnings when product quality is insufficient and defects may occur

—     a shorter testing period in the total software development lifecycle

—     re-use of test process deliverables (such as test scripts and test cases)

—     consistency and standardization: everyone involved speaks the same test language.


TMap and ITIL

ITIL is the most widely recognized framework for ITSM in the world. In the 20 years since it was created, ITIL has evolved and changed its breadth and depth as technologies and business practices have developed. ISO/IEC 20000 provides a formal and universal standard for organizations seeking to have their service management capabilities audited and certified. While ISO/IEC 20000 is a standard to be achieved and maintained, ITIL offers a body of knowledge useful for achieving the standard.

TMap and ITIL: brothers-in-arms

ITIL Life-cycle stream

ITIL provides guidance to service providers on the provision of quality IT services, and on the processes, functions and other capabilities needed to support them.

TMap provided the ALM strategy that formed the ITIL Service Strategy. The lifecycle starts with a service strategy: an understanding of who the consumers of the service offering are, what IT capabilities and resources are required to develop the offering, and what the requirements are for executing it successfully.

My role with the client started with the Service Strategy and flowed through to Continual Service Improvement.

I worked closely with the client in service design to ensure Quality Center was designed effectively to meet my client's expectations and support the test strategy.

In service transition, I built, tested and moved the solution into production, enabling the business client to achieve the desired value.


Bottom Line

TMap and the ITIL Life-cycle stream helped me achieve what had seemed impossible for my client: better service, at lower costs.


Information technology (IT) governance

ITIL is a recognised best practice for implementing IT governance. It further defines how we implement IT Service Management and where testing sits in the model.

AUTHOR: Sogeti blog





In my previous post, I outlined why holiday readiness is important and how to get started. In this post, I will discuss some approaches to optimizing the infrastructure that will handle the holiday traffic.


Tuning an app in isolation is like buying a million-dollar supercar and having no race track to drive it on. Sure, you could drive it on the street, but you wouldn't be taking full advantage of the car's potential. This is why it is important to optimize all layers of your site's stack.


Making Changes Under the Hood

There are several aspects of an e-commerce operation that need to be optimized. As with any site, constant change (new projects, new content, etc.) tends to undo the optimizations from the previous year. There may also be other factors, such as new infrastructure or new versions of software in your application stack. Organizations tend to discount the impact of tuning infrastructure (especially if it's virtualized), and this should be your starting point.

Infrastructure and Application Tuning

A site could have the leanest software across the stack and still be unable to handle traffic if the underlying hardware lacks adequate capacity. Allocating appropriate hardware, with room for contingency, is the first step towards a stable environment and application stack during the peak holiday season.

The next step is to take a deeper look at the configuration of the application server stack. Most application stacks (application servers, databases, etc.) ship with very conservative default settings, which are fine for normal traffic. Handling heavy holiday traffic, however, requires probing and determining the optimal settings across each layer to ensure smooth performance.
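As a toy version of that probing exercise, the sketch below measures the throughput of a simulated 50 ms I/O-bound request at several concurrency settings. The workload and numbers are invented; against a real stack you would drive the actual servers with a load-testing tool and tune their thread and connection pools the same way.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request(_):
    """Simulate an I/O-bound request, e.g. a ~50 ms downstream call."""
    time.sleep(0.05)


def throughput(workers, requests=200):
    """Requests per second achieved with a given worker-pool size."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_request, range(requests)))
    return requests / (time.perf_counter() - start)


# Probe candidate pool sizes, just as you would probe an application
# server's thread pool or a database connection pool before the peak.
for workers in (4, 16, 64, 256):
    print(f"{workers:>3} workers -> {throughput(workers):6.0f} req/s")
```

Past a certain size the gains flatten, and on a real stack they reverse as contention kicks in; that turning point is exactly what the probing is meant to find.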

Code Optimization

Having the right hardware and application server stack allows an application to perform to its full potential. However, not all code is written equally. Most developers do not write code with extreme holiday performance metrics in mind, and it is up to the performance engineers to do a thorough analysis and identify bottlenecks in the code. Using tools such as AppDynamics to identify issues within the code, and then fixing those issues, can boost site performance significantly.
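Commercial APM tools give the richest view, but even a built-in profiler illustrates the bottleneck hunt. In this hedged sketch, Python's cProfile flags an intentionally slow function; the hotspot is invented for demonstration.

```python
import cProfile
import pstats


def build_page():
    """Hypothetical hotspot: repeated string concatenation is quadratic."""
    html = ""
    for i in range(20000):
        html += f"<li>item {i}</li>"
    return html


profiler = cProfile.Profile()
profiler.runcall(build_page)
# Show the five most expensive calls; in real code these point straight
# at the functions worth rewriting before the holiday peak.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```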

Final Thoughts

Achieving the right performance gains is an iterative exercise in optimizing various parameters to get the correct balance. This is not a process that is done once and then forgotten; it needs to be revisited frequently to ensure that your application is performing to its true potential and meeting customer expectations.



AUTHOR: Nihar Shah
Nihar Shah has been a consultant with Sogeti USA since 2008, working with various multinational retail clients in the Columbus region. A technology enthusiast by nature, Nihar has worked on various platforms from the IBM and Oracle stacks, and has taught himself languages and frameworks such as Ruby, Objective-C and NodeJS. He has worked extensively in designing high-volume, reliable and scalable systems on the J2EE stack. In addition to client commitments, Nihar is the practice manager for the Digital Transformation practice in the Columbus region and is actively involved in crafting digital solutions for various clients in the region.



In my last blog, I described the first steps that Peter Sommer took to handle the challenges facing him as the new CIO at Olfama.

The small change in the service desk is already proving successful: response times during a systems failure have dropped significantly and, though not pleased with the failures themselves, some end users are acknowledging the improved service during them.

The changes to the configuration management process have been implemented, but Peter knows it will take a while before the improvements show up as measurably better systems availability. The operations staff will have to work very hard for a significant period before the improvements make their job easier.

Application landscaping resulted in a list of systems that are candidates for decommissioning: they have very few users, do not support the business processes well, and are based on outdated, unsupported technology that Olfama no longer has the competencies to maintain. Peter decides to form a small team of business analysts to find replacement systems matching the remaining application landscape as well as the architecture principles and the infrastructure roadmap.


Peter analyses the causes behind the many projects running over time and budget. He realises that one cause has already been addressed: insufficient configuration management making the path to production difficult. Secondly, development staff are working on multiple projects simultaneously, constantly shifting from one project to another due to shifting priorities, waiting on deliveries from other projects on the dependency list, or waiting for end-user involvement. Diving deeper into the missing end-user involvement, Peter finds that the same users are key to several concurrent projects and, at the same time, key to operations in their own area – a situation that makes planning almost impossible.

He approaches the director of Business Development, John Winter, to learn what kind of project portfolio management is in place in the company. John tells Peter that project funding is distributed and any project can run, but that projects directly implementing a corporate strategic initiative take precedence. Peter suggests that a more systematic approach would have the potential to increase the project success rate, or at least improve the chances of staying within time and budget.

Peter can show John that the change projects involving IT support half of the corporate strategic initiatives, and that 70% of these projects support a single strategic initiative. John argues that it is unlikely that half of the strategic initiatives need no IT, so something must be wrong. They agree that Peter and John together should formulate a project portfolio management process and have top management approve it as a corporate standard.

In the next blog, I will write about the project portfolio management and application portfolio management processes that Peter has started to improve.


AUTHOR: Sogeti blog



We usually approach IT as a source of disruption in our lives, but we rarely mention the effect this revolution has had on the IT world itself. Infrastructure as we used to know it is about to pass away.

It has become cliché to talk about disruption nowadays. Over the past decade – and this will surely continue – we have heard about the irremediable impact of IT on our everyday lives: the internet, web 2.0, wearables, 3D printing, bitcoin, IoT, blockchain and many more. We talk about the way we now buy, pay, book, rent, sell, share... and I could go on and on.


How has the datacenter evolved since it first appeared, and how does that affect us – IT professionals – today and for the decades to come?

Before answering the question, let's first agree on a short definition of the word 'datacenter': a computing facility with infrastructure and storage elements plugged into a secured power supply. By that definition, we can consider ENIAC the grandfather of the datacenters we know today. ENIAC was built for the US Army to calculate artillery firing tables. No other computer in history had comparable storage and calculation capacity at the time. The entire installation required more than 160 square meters. It was a great hardware innovation, and a great industrial innovation as well.

Following ENIAC came TRADIC, built in 1954, also in the US. TRADIC was the first real commercial application of the datacenter concept and reached real success in the '60s. Prior to TRADIC, mainframes were built mainly for government and military purposes; from this point on, IT was finally open to companies as well.

Then came the CDC 6600, remembered as the first supercomputer. This next-generation system claimed a cycle time of 100 nanoseconds, was based on a 60-bit architecture and was the first to include a CRT console. Yes, another hardware innovation. To possess a CDC you still had to pay no less than 8 million dollars; only the big players could afford such an expense.

Finally, ARCNET, which we can consider the ancestor of local area networking. At the time, the revolutionary idea was to connect computers to a shared floppy data drive via coax cables. From that point, telcos entered the innovation race. And a couple of years later came the PC. Mainframes were extremely expensive and required enormous resources in space, operation and cooling, so to bring IT to more and more companies (not only the majors), IBM designed the first widely recognized PC in 1981. Companies worldwide began deploying computers throughout their organizations and saw significant benefit in them, and the IT penetration rate started to take off for good.

What we have seen so far of the datacenter's history and evolution is that the driver was hardware innovation.

The schism in the IT infrastructure paradigm

We – IT guys – used to reign within companies, coached by hardware brands and their innovations. We used to explain how important it was to have a full 100 Mb/s network or SCSI external storage... We used to dictate what was compatible with what.

But in the '90s, IT penetration increased in the consumer world too, so we were no longer the only ones who thought we understood IT. The end user was becoming more and more powerful.

The Internet, which reached an inflection point toward mass-market adoption in the mid-1990s, brought a feeding frenzy to datacenter adoption in the late '90s. As companies began demanding a permanent presence on the Internet, the data-center-as-a-service model became common for most companies.

But the most important change marking this schism is that we don't care about infrastructure anymore, and I'll explain why.

While hardware innovation was the fuel of change in the first half-century of the datacenter's life, today we have reached enough maturity to ask, "Why do we go for this innovation in IT?" IT guys no longer dictate which tools must be used. Nowadays, the newly empowered users are mature enough to define their IT needs, and it's their turn to dictate their wishes:

"I want to be able to work anywhere, at any time, and don't try to explain to me that it's not compatible, because today everything is compatible with everything. From anywhere, I can book a house in Honolulu on Airbnb, on my Android tablet or my office computer, even from my TV, and to authenticate I just use my Facebook account. And you try to explain to me why our CRM is not compatible with our email system, and that it is normal to juggle tens of passwords! Come on! You IT guys need to escape from prehistory."


… are you able to name the brand of hardware that hosts your most critical professional data?


And do you know whether it is SCSI, IDE, SATA II or III, virtual, or whatever other technology that stores your data? We don't have a clue (even us IT guys), and we just don't care.

The Cloud is here, and the mutualization of IT resources with it, in both its private and public visions.

Datacenter curve

On top of this, SDN and container functionality are coming with Windows Server 2016. The first will give everyone the capability to move IT services from on-premises to any cloud provider with no outage, almost by drag & drop. The second will free developers to code regardless of the back-office frameworks and front-office devices/browsers in use.

I have worked in IT in the business world for more than 20 years, and I can see the shift in how IT strategy is defined. We used to follow manufacturers' innovations to bring additional capacity and capability to the business, progressively escaping from hardware considerations, brand considerations, network and location considerations. With the rise of web-based apps, we are now even escaping from end-user OSes and devices. On top of that, Docker and containers are coming, and we won't even care about the underlying databases, storage or deployment tools anymore.

IT infrastructure, as we used to know it, is dead. Does that mean our business is too? Hell no!
And that will be the topic of my next article. Stay tuned to SogetiLabs.


AUTHOR: Kamel Abid
In summer 2014, Kamel Abid celebrated his 20th year as an IT professional, having swum among computers and keyboards since the age of 5. Today, he manages tens of experts and architects involved in complex transformation and integration of IT infrastructures in the Grand Duchy of Luxembourg. In parallel, as an Expert Leader, Kamel animates the Desktop & Unified Communication PMP through events, workshops, articles in newspapers, social media, etc. A native of Nice (French Riviera), Kamel previously worked as a Developer, System Administrator, Support Engineer, Project Leader, Bid Manager and then Delivery Manager. Hired at Sogeti Luxembourg in 2008, he has passed through several in-house activities: consulting, bidding, team leading and now knowledge sharing as a domain expert.



My former blog was about the importance of Rocket projects in supporting the digital transformation of an organization. Rocket projects help customers and employees see and feel that an organization is heading for a new identity. But what if that identity is not clear? What if an organization doesn't have a clear view of its role in the ever-changing digital era?

I have been speaking about this at a lot of different organizations lately, and it strikes me that many organizations do not have a clear vision, or lack awareness altogether.

Defining the "New Us" is indeed difficult, since it is about the future, and it is often outside the comfort zone. Imagine you are a bank: how would you deal with crowdfunding, bitcoin and blockchain? As a telecom organization, how do you deal with the Internet of Things? As a logistics organization, with 3D printing? If you are a retailer, how do you deal with the sharing economy? If you are a university, with Massive Open Online Courses (MOOCs)?

Getting inspired by the new world is easy, making choices is not.

As a business analyst, I see the following three instruments as very helpful in identifying who you want to be as an organization in the digital era:

Business Model Canvas
A Business Model Canvas is not new, but it is still an extremely helpful instrument for determining future value propositions and business models. Since new technology tends to have an impact on organizational boundaries and business models, the value of the Business Model Canvas is greater than ever.

Value Chains and Transactions
When determining the "New Us", it is not enough to look only at your own organization; it is the complete value chain that is open for disruption. The Internet of Things, for example, literally connects organizations that were not connected before. Making this visible in terms of value chains and transactions helps you understand the changing value chains and determine your organization's position within them.

Business Story Telling

Business Model Canvas and Value Chains are about the hard facts, but Business Story Telling makes the "New Us" human. And that is what it is all about in the end. Business stories can be used to find the "New Us": What is the strength of our organization? What is it that we are proud of? What is it that unites us?

Furthermore, stories can be used to communicate the "New Us" within the organization. Good business stories tend to have much more impact on people than a carefully formulated vision statement. I am convinced that the strength of these instruments lies in using them in combination. Together, they can help determine who you want to be in the digital era and make sure people are aware of it.


AUTHOR: Andre Helderman
André Helderman has studied both Business Information Technology and Organizational Sociology, which makes clear that he is interested in the impact of technology on human behaviour.
