The topic of innovation is critical in this age of digital disruption and transformation. How do companies and organizations plan for and deliver innovation, which often seems to be an elusive mirage? The answer is to include innovation in the strategic executive agenda and invest long term in an Innovation Lab.

An Innovation Lab, if set up correctly, enables rapid, business-value-focused ideation and prototyping. When connected end-to-end, from the origin of the opportunity or problem to final production, time-to-market improves, speed-to-volume accelerates, and, more importantly, it becomes possible to bring that which is radically different to the market.

The term “radically different” is critical in an Innovation Lab context. Business-as-usual development outside of the Innovation Lab needs to stay relevant and innovative; innovation is not something that can or should be excluded anywhere, even if an Innovation Lab is implemented. However, an Innovation Lab must be challenged with the primary task: to design and bring to market that which is radically different, since this factor is at the heart of disruption and transformation.


That said, here are my top 10 Innovation Lab Best Practices:

  1. Define and publish the Innovation Lab charter

The purpose of a charter is to clearly and concisely explain what the Innovation Lab should do and how. The charter could be as short as:

“XYZ Labs delivers innovation to radically improve customer experience and accelerate business operations. The lab lives in an open ecosystem of idea generating forums and prototypes in a failing-fast spirit.”

Linked to the charter, you need to define the Innovation Lab’s governance model, process, location, and budget. Publish the charter. Make it known. Be proud of it.

  2. Ask the right questions

An Innovation Lab often tries to do the seemingly impossible: creating a structured approach to what seems to live by randomness and ad-hoc opportunities. However, by asking the right questions you direct attention to the right topics. For example:

“How can we radically improve customer experiences in sales and delivery processes?
How can we delight the customer in unexpectedly helpful ways?
What are the most critical customer problems?”

  3. Define the characteristics of a great idea

Guiding innovation principles and desired digital characteristics are not about limiting the scope of relevant ideas or about detailing specific ideas. Instead, they serve as helpful markers in the selection process. When these markers are known in advance, the relevance of gathered ideas increases.

Examples of characteristics: contextually smart, real-time connections between customer events and internal operations, radically innovative, differentiating, connects customers/users with each other, omnichannel enabled, unexpectedly digital, enabling third party companies to add value to customers/users, etc.

  4. Implement an open ideation process and platform

Great ideas come from everywhere, from both external and internal sources. Limit ideation sources as little as possible, and engage with different sources in a planned and structured way. Implement a capable ideation platform where ideas can be discussed, rated, and fed into the Innovation Lab. Include idea generation at every digital touch-point, for example through “I have an idea!” widgets.
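To make this concrete, here is a minimal sketch of the core of such an ideation backlog; the class and method names are hypothetical illustrations, not taken from any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    """An idea submitted via any touch-point (app widget, portal, workshop)."""
    title: str
    source: str                              # e.g. "customer-app", "hackathon"
    ratings: list = field(default_factory=list)

    @property
    def score(self):
        # Average community rating; unrated ideas score 0.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

class IdeationBacklog:
    """Minimal in-memory backlog: collect, rate, and rank ideas for the lab."""
    def __init__(self):
        self.ideas = []

    def submit(self, title, source):
        idea = Idea(title, source)
        self.ideas.append(idea)
        return idea

    def top(self, n=3):
        # Highest-rated ideas first, ready to feed into the lab's selection step.
        return sorted(self.ideas, key=lambda i: i.score, reverse=True)[:n]

backlog = IdeationBacklog()
a = backlog.submit("One-tap reorder", "customer-app")
b = backlog.submit("AR manual", "hackathon")
a.ratings += [5, 4]
b.ratings += [3]
print([i.title for i in backlog.top(1)])  # → ['One-tap reorder']
```

A real platform would add authentication, discussion threads, and persistence, but the essential flow is just this: submit from anywhere, rate openly, rank for the lab.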

  5. Engage proactively with critical ideation groups

Identify key persons in VIP customer/user groups, interact with them frequently and specifically on the topic of innovation, and give special attention to their ideas.

  6. Observe the customer/user

Nothing beats leaving the office and workshop facilities and going to where the customer/user physically uses your products and services. This is especially true in app and IoT ideation. Observe and interact with both the customer/user and employees at or near the location. The more time you can spend where the actual challenges and opportunities exist, the better you’ll understand what innovation can improve.

  7. Select the right ideas to prototype

The single most important aspect of the Innovation Lab process is most likely the step from ideation to prototyping. Why is one idea considered good and another not as good? Why is one idea selected to move forward to prototyping and another not? As soon as this step is perceived as ad hoc, uncontrolled, or driven by purely personal preference, the legitimacy of the Innovation Lab can be questioned. Even though some innovative ideas don’t fit into seemingly simplistic molds of criteria, most actually do.
The answer is to filter ideas through desired characteristics, customer/user desires, technical feasibility, and business value, and to keep an updated and visible backlog/roadmap for the Innovation Lab.
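As an illustration, a simple weighted filter along those four dimensions might look like the sketch below; the weights and threshold are assumptions made for the example, not prescribed values:

```python
# Score ideas on the four filter dimensions named above. The weights are
# illustrative; a real lab would tune them to its own charter.
WEIGHTS = {"characteristics": 0.3, "customer_desire": 0.3,
           "feasibility": 0.2, "business_value": 0.2}

def idea_score(ratings):
    """ratings: dict mapping each dimension to a 0-5 rating."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

def select_for_prototyping(ideas, threshold=3.5):
    """Return ideas above the threshold, best first, for the visible backlog."""
    scored = [(name, idea_score(r)) for name, r in ideas.items()]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

ideas = {
    "self-service returns": {"characteristics": 4, "customer_desire": 5,
                             "feasibility": 4, "business_value": 4},
    "blockchain loyalty":   {"characteristics": 3, "customer_desire": 2,
                             "feasibility": 2, "business_value": 3},
}
print(select_for_prototyping(ideas))  # only the high-scoring idea survives
```

The point is not the arithmetic but the transparency: when the scoring rubric is published, the step from ideation to prototyping stops looking ad hoc.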

  8. Smart prototyping using MVP, fail-fast, and beta

Create prototypes using correctly scoped Minimum Viable Product features, and fail fast!

Don’t bet the farm on one plant. Many Innovation Lab ideas fail because the lab’s entire bandwidth is spent on one single idea. Plan to run multiple idea prototypes simultaneously. This forces you to adopt Minimum Viable Product scoping, i.e. asking what you can take out of the idea and still try it. Mind the common pitfall in cutting scope: don’t keep most of the functional scope and cut most of the user experience/visual design! Instead, keep just the most critical functional aspects of the idea, and wrap those in as much visual glory as possible.

Fail fast and stay away from destructive pride. A prototype shouldn’t typically take more than four weeks to design and develop. With the hardware challenges in IoT, allow another four weeks, but consider two months a very long time for a prototype.

Unless some or even most prototypes fail, for whatever reason, you are not trying hard enough as an Innovation Lab. Therefore, make sure failing is expected in and around the lab, and execute prototypes in a fail-fast spirit. Avoid getting stuck on one idea for too long, especially if it is untried with customers/users. Be patient: the same idea may re-appear in other lab contexts.

Make room for prototypes in your production systems. For example, create a place on your site or in your apps to hold “Beta” features. You can have some beta features available for everyone, and if you support customer/user accounts, you can even enable some beta features just for select groups of customers/users.
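A minimal sketch of such per-group beta gating, with hypothetical feature and group names:

```python
# Which groups may see which beta features. "everyone" marks an open beta;
# the feature and group names are illustrative, not a real product's config.
BETA_FEATURES = {
    "voice-search": {"everyone"},            # open beta
    "smart-refill": {"vip", "employees"},    # closed beta for select groups
}

def beta_enabled(feature, user_groups):
    """True if the feature is in open beta or the user is in an allowed group."""
    allowed = BETA_FEATURES.get(feature, set())
    return "everyone" in allowed or bool(allowed & set(user_groups))

print(beta_enabled("voice-search", ["regular"]))   # True: open to everyone
print(beta_enabled("smart-refill", ["regular"]))   # False: closed beta
print(beta_enabled("smart-refill", ["vip"]))       # True: VIP group
```

In production this table would typically live in a feature-flag service rather than in code, so prototypes can be switched on and off without a deployment.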

  9. Leverage the creative nature of open innovation ecosystems

Enable third party groups to prototype their own ideas by supplying sandboxed environments with as complete APIs as possible. You’d be surprised how many are intrigued by the creative nature of innovation. Expect partners, startups, students, and other third-party groups to be interested in innovating in and around your traditional value chain. Invite them to open hackathons and innovation workshops.
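For instance, issuing sandbox-only, scoped API keys to third parties could be sketched as follows; the scope names and key format are illustrative assumptions:

```python
import secrets

# Scopes a third party may receive in the sandbox; production scopes are
# simply not in this set, so they can never be granted here. Names are made up.
SANDBOX_SCOPES = {"read:catalog", "write:prototype-orders"}

def issue_sandbox_key(partner, requested_scopes):
    """Grant only the intersection of requested and sandbox-safe scopes."""
    granted = set(requested_scopes) & SANDBOX_SCOPES
    return {
        "partner": partner,
        "key": secrets.token_hex(8),      # 16-hex-char opaque credential
        "scopes": sorted(granted),
        "env": "sandbox",
    }

key = issue_sandbox_key("hackathon-team-7", ["read:catalog", "write:billing"])
print(key["scopes"], key["env"])  # → ['read:catalog'] sandbox
```

The design choice worth noting: the sandbox grants by intersection with an allow-list, so a hackathon team asking for a production scope silently gets nothing dangerous.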

  10. Take feedback seriously

Prototypes are meant to be tried and tested, not just by Innovation Lab staff but by customers/users. Be sure to take their feedback seriously. Be prepared to iterate through a handful of versions of each prototype before deciding whether or not to introduce the solution into production.


Finally, the handover from the Innovation Lab to production is critical. Be pragmatic on resourcing in the handover phase. Most likely Innovation Lab staff will need to participate in the first few steps of infusion of the solution into that which is already in production.

If you need assistance in setting up or revitalizing your Innovation Lab, don’t hesitate to contact me. We recently launched a new initiative called “Applied Innovation Exchange” with a robust Innovation Lab service catalog (and an exciting San Francisco venue to meet at). Together with my Capgemini and SogetiLabs colleagues, I’d love to interact with you on that which is radically different.


AUTHOR: Andreas Sjöström
Andreas Sjöström is Sogeti’s Global Mobility Practice Lead. He is sincerely passionate about new business-enabling technology. For more than ten years, his focus has been on mobile solutions and apps for business. Andreas is a senior adviser to multiple international companies and co-author of “The App Effect”.



Do you pride yourself on your knowledge of modern application development?  I always did and kept my skills current, until recently, when I discovered that I was way behind.

Platform-as-a-Service (PaaS) is a significantly different approach to custom application development.  It kicks n-tier to the curb, and I love it. Traditionally, applications were built in three tiers: UI, business logic, and database.

It was all custom code, with maybe some widgets or libraries for complex functionality.  The entire application was usually expected to run on a single server, and we developed the system in a slow, waterfall approach.

Recently, SOA (service-oriented architecture) showed us how to encapsulate business-critical functionality into services.  This enabled an application to be distributed across several services and orchestrated with special software.  Using agile methods, developers had access to pre-built objects specific to the business domain, and users gained early visibility into the application.

The entire application is now a series of services.  Developers pick and choose the modules they need to achieve a specific goal.  You don’t know where your application is running. There are many business benefits to this approach:

  • Pace of innovation is accelerated
    Thanks to on-demand cloud models, we can instantly provision and connect to services. (I saw a developer create a full sentiment analysis application in less than an hour using PaaS.)
  • Lifecycle Management is completely different
    The application can and will change as new services are made available by vendors.
  • DevOps models
    DevOps models simplify when the platform is cloud-based; there is less code to deploy and no servers to maintain, patch, and support.

  • Quality
    Quality improves when most of the code consists of reusable services assembled with minimal customization.  Testers can focus on scenarios and business risk.
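The pick-and-choose idea above can be sketched with each cloud service stubbed as a plain function; in a real PaaS these stubs would be managed API calls, and the sentiment logic here is a toy stand-in, not a real service:

```python
# A toy "application as a series of services": ingest -> analyze -> store.
# Each function stands in for a managed PaaS service the developer would
# provision on demand rather than build.

def ingest_service(texts):
    """Stand-in for a data-ingestion service: drop empty input, normalize."""
    return [t.strip() for t in texts if t.strip()]

def sentiment_service(text):
    """Toy stand-in for a managed sentiment-analysis API."""
    positive = {"love", "great", "good"}
    return "positive" if set(text.lower().split()) & positive else "neutral"

def storage_service(records, store):
    """Stand-in for a managed data store."""
    store.extend(records)
    return store

store = []
reviews = ["I love this app ", "", "It works"]
results = [(r, sentiment_service(r)) for r in ingest_service(reviews)]
storage_service(results, store)
print(store)  # → [('I love this app', 'positive'), ('It works', 'neutral')]
```

The application logic is just the orchestration of the three calls; swapping in a better sentiment service changes one line, which is exactly the lifecycle-management point above.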

PaaS has changed my world, and I hope it has a similarly strong impact on your view of modern application development.

Feel free to share your challenges and experiences with me in the comment box below.

AUTHOR: Sogeti Labs
SogetiLabs gathers distinguished technology leaders from around the Sogeti world. It is an initiative explaining not how IT works, but what IT means for business.



(This article is a review and summary of different white papers, books and courses referenced in the bibliography section, focusing on business models and some examples of them.)

The Internet of Things (IoT) represents the vision that every object and location in the physical world can become part of the Internet: objects and locations are generally equipped with networking, storage, and computing capacities, and so they become smart objects that can take in information about their environment and communicate with the Internet and other smart objects. Smart things are thus hybrids, composed of elements from both the physical and digital worlds (see Fig. 1, from [1]).


Figure 1: Value-creation layers in an Internet of Things application (from [1]: Fleisch, E., Weinberger, M., Wortmann, F.)

The key layer connecting both worlds is the connectivity layer. The physical world is not just about hardware, but increasingly about software as well (think cars). However, a physical thing becomes “smart” when it connects to the digital world. Layers 2, 3, and 4 allow us to invent and propose to individuals (customers but also citizens) new services (the digital services of layer 5). One important fact is that layers 1 through 5 cannot be created independently of each other; that is why the arrows connecting them in Fig. 1 are bi-directional. An IoT solution with value is usually not the simple addition of layers, but rather an integration extending into the physical level. How the hardware is built, for instance, is increasingly influenced by the subsequent digital levels, and, on the other hand, the software composing the digital levels must be designed to fit the physical levels.

The connection between the two worlds allows new business models to emerge across manufacturing industries (i.e. industry that designs and builds physical world products). Here are some examples:

  • Canary or Netatmo offer smart home alarm systems that include a variety of sensors, from temperature or movement sensors up to a camera. The basic function of monitoring rooms during the resident’s absence and sending a message to a smartphone app in the event of anomalies is included free of charge in the price of the system. Other services are offered at additional cost. This is the physical “freemium” business model.
  • You have a car and you want to travel abroad, where not all risks are covered by your insurance. In this case, if your automobile’s performance can be configured using software and the vehicle is a node on the Internet, you can purchase from a third party the right additional mini-insurance policy just for this journey. This is the digital add-on business model.
  • By pointing a smart phone at a product, an Internet website opens where that same product – including replacement parts, accessories, and consumables – can be purchased. Amazon already offers this type of service for products with a bar code that are carried by the e-tailer. In this case, the product itself becomes a point-of-sale. It is the “product as point-of-sale” business model.
  • Your heating system orders its own oil refills as soon as a certain level of liquid is noted in the oil tank. You have nothing to do! The thing has the ability to independently place orders on the Internet. The idea of self-service no longer refers only to the customer; now things can serve themselves too. This is the object self-service business model.
  • Brother offers leases for laser printers without any base leasing rate – only the pages that are actually printed are invoiced. In the past, the technology required to monitor remote usage was complicated and relatively expensive, but as the IoT expands rapidly, the required costs diminish, making this pay-per-use model very effective and attractive for customers. This is the remote usage and monitoring business model.
  • Another powerful idea is collecting, processing, and selling for a fee the sensor data from one sub-section to other sub-sections of the fog and cloud computing infrastructure and ecosystem. The measurement data from the physical world are no longer vertically integrated, collected, stored, and processed for just one application, but instead for a broad array of potential applications. In this Sensor as a Service business model, value is created by making sense of data, not from the physical product itself, which will usually be inexpensive (at least compared with existing solutions until now). Fig. 2, from the 2014 VisionMobile report (reference [4]), illustrates this point.


Figure 2: Value in IoT is created by making sense of data (report: Breaking Free from Internet and Things, VisionMobile 2014, CC-BY-ND)
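The object self-service model described above can be sketched as a thing that monitors its own sensor reading and triggers an order when a threshold is crossed; the class and field names are illustrative, and a deployed system would call a supplier's ordering API instead of appending to a list:

```python
# Sketch of "object self-service": a heating-oil tank that re-orders its own
# refill when its level sensor drops below a threshold. Names are made up.
class OilTank:
    def __init__(self, capacity_l, reorder_threshold=0.2):
        self.capacity_l = capacity_l
        self.level_l = capacity_l          # start full
        self.reorder_threshold = reorder_threshold
        self.orders = []                   # stand-in for a supplier API

    def consume(self, litres):
        self.level_l = max(0.0, self.level_l - litres)
        self._maybe_reorder()

    def _maybe_reorder(self):
        # Order a top-up once, when the fill ratio crosses the threshold.
        if self.level_l / self.capacity_l <= self.reorder_threshold and not self.orders:
            self.orders.append({"litres": self.capacity_l - self.level_l})

tank = OilTank(capacity_l=1000)
tank.consume(700)   # 300 l left: above the 20% threshold, no order yet
tank.consume(150)   # 150 l left: below threshold, the tank orders a refill
print(tank.orders)  # one pending order for 850 litres
```

The business-model point is in the last two lines: the customer does nothing; the thing itself is the buyer.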

Implementing these business models is not trivial. Issues arise from the key differing characteristics of the physical world (i.e. products) and the digital world (i.e. services):

  • First, services differ fundamentally from products because they cannot be stored; as a rule they are provided at the customer’s site in collaboration with the customer (the delivery of the service involves human or IT interactions), and they are generally paid for in many smaller amounts spread out over time.
  • Second, the differences between physical and digital products are particularly noticeable in product development. In the digital world, agile development processes are the norm today. When a bug can be repaired with an update at almost no cost, even in an installed base counting millions of instances, speed, early customer contact, and aesthetics are of utmost importance in development. In the hardware business, however, and in the world of embedded computing as well (i.e. a mix of hardware and firmware/software), an error in a product that has already been sold usually results in an extremely costly, image-damaging recall (remember car recalls due to mechanical or software defects).

However, the digitalization of hardware functions is nowadays becoming more and more important and advanced (see software-defined X technologies in IT infrastructure, e.g. software-defined networking, software-defined storage, etc.), and this trend will certainly help close the gap between physical and digital cultures.

Traditional business models put emphasis on products, technology and specific verticals. That makes sense when customers are known and their needs are well understood. The IoT is different. The demand for billions of connected devices will come from people using services and apps that make sense of the data that those devices generate.

The future leaders of the IoT will win by building communities of entrepreneurs around their products and services, which means putting in place a technology community platform (i.e. a system that can be adapted to countless needs and niches; see Marc Andreessen – founder of Mosaic and Netscape – and his definition of a platform). To be successful, you will have to master both the technology and the human art of managing ecosystems.

This is the key success factor in building the new business models fostered by the IoT.


[1] Fleisch, E., Weinberger, M., Wortmann, F.: Business Models and the Internet of Things. Whitepaper of the Bosch Internet of Things and Services Lab, a cooperation of HSG and Bosch, 2014.

[2] Fischer, T., Gebauer, H., Fleisch, E.: Service Business Development: Strategies for Value Creation in Manufacturing Firms. Cambridge University Press, 2014.

[3] Fleisch, E.: What is the Internet of Things? An Economic Perspective. Auto-ID Labs White Paper WPBIZAPP-053, ETH Zürich & University of St. Gallen, January 2010.

[4] Schuermans, S., Vakulenko, M.: IoT: Breaking Free From Internet and Things – How communities and data will shape the future of IoT in ways we can’t imagine. VisionMobile Report, 2014.

[5] Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog Computing and Its Role in the Internet of Things. Cisco Systems Inc., 2012.

[6] Stojmenovic, I., Wen, S.: The Fog Computing Paradigm: Scenarios and Security Issues. Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, ACSIS Vol. 2, 2014.

[7] Chiang, M. et al.: Fog Networks and the Internet of Things. MOOC, Princeton University, 2015.

Apple Pay is a pure-play NFC-payment system. It uses the Near Field Communication chip in specific Apple devices to make a contactless payment. Technically it will work in stores that accept contactless card payments, although it will also require the vendor to have signed up.

The Touch ID fingerprint scanner is key to the whole thing, but you will also need a specific NFC antenna that is built into certain Apple devices.

If the shop you’re in supports Apple Pay, they will have a little sensor by the till. You put your iPhone on the sensor, put your finger on the Touch ID fingerprint scanner to identify yourself, and that’s it. There are NFC antennae in the iPhone 6 and iPhone 6 Plus, but not in any earlier iPhones. There are also NFC chips in the iPad Air 2 and iPad mini 3, but they appear to be deactivated for the time being; no iPad is able to use the full, in-store version of Apple Pay.

The Samsung Galaxy S6 and S6 Edge both have NFC for use with Samsung Pay, but there’s a key technical difference between Samsung Pay and Apple Pay. Rather than limiting transactions exclusively to NFC, Samsung Pay will also work with magnetic strip readers – a much older technology which is available in virtually every shop that accepts card payments. Samsung is doing this with a new proprietary technology called Magnetic Secure Transmission, or MST for short.

The key to leading the nascent mobile payment market lay in how to guarantee “wider usage,” and Samsung’s use of magnetic secure transmission (MST) technology would give the company a significant advantage over its two main rivals, Apple Pay and Android Pay.

MST enables users to make payments in places where traditional magnetic-strip credit cards are accepted. Samsung’s two main rivals have yet to use the technology.

MST technology would give Samsung momentum against Apple Pay and Android Pay, both of which use near field communication (NFC) technology, which is not widely used in everyday life.

Security is a key issue determining the success or failure of a mobile payment business. Samsung Pay uses tokens and fingerprints to guarantee the same level of security as its rivals.

“Samsung Pay offers more comprehensive options by using both NFC and MST technology,” Lee said.

Samsung Pay vs Apple Pay: verdict

Samsung Electronics’ mobile payment system does have the edge: Samsung Pay is expected to dominate the mobile payment market because, by utilising Magnetic Secure Transmission, it is the most compatible with merchants’ terminals.

At launch, Samsung Pay is only available in the company’s home country, Korea. But it will expand to the US on 28 September 2015, and the firm has indicated that the UK, Spain, and China will be next to get the facility in the near future.

That signals a more aggressive rollout than Apple Pay, which currently remains limited to the US and UK.

The big question for users is Samsung Pay or Apple Pay? It’s unlikely that many people will be swayed from Apple’s iOS ecosystem solely because of the type of mobile payments Samsung supports.

Perhaps the more important comparison is with Android Pay – Google’s forthcoming mobile wallet service.

Google is also pitching its service as simple to use because it doesn’t need a special app to be launched. But it will require payment terminals to offer NFC support.

AUTHOR: Sogeti UK Marketing team



What is Stagefright?

Stagefright is an exploit that capitalises on vulnerabilities within the software that Google’s Android OS uses to process, play and record multimedia files.

The vulnerability can be initiated through the sending of a simple picture message, and it can also make its way onto a device simply by landing on a webpage containing affected embedded video content.

Once an Android device has been infected by Stagefright, a hacker can remotely access the device’s microphone, camera, and external storage. In some cases, they can even gain root access to the device.

Which phones are affected?

Virtually all Android devices are susceptible to the Stagefright exploit. It affects phones running Android 2.2 and above, which means pretty much every Android phone still in operation today.

It is estimated that roughly 95 percent of Android phones in use today are at risk; this could equate to 950 million vulnerable Android devices around the world.

Motorola, too, has revealed that it will be updating its entire modern range (effectively from the first-generation Moto X onwards), with carriers set to receive the files for testing from August 10, 2015.

What can you do right now?

A device will remain vulnerable to Stagefright exploits until it receives Google’s patches for these vulnerabilities.  Android devices other than Nexus devices will ultimately need to get these patches through a Google partner (either a device manufacturer or a wireless carrier).

Unfortunately, security patches delivered by Google’s partners can take weeks or even months to fully deploy.  To check if a patch is available for most Android devices, go to Settings and tap System Updates.  In the meantime, Android users waiting on Stagefright security patches can take additional steps on their devices to protect themselves.

Just download the free Stagefright Detector App by Zimperium (the security company Joshua Drake works for) from the Google Play Store.


Once you’ve run the test (it takes seconds), you’ll probably discover that your phone is vulnerable. Note: this doesn’t mean that your phone has been infected, so don’t panic!

If your phone is vulnerable and you use Hangouts as your main messaging app, go into Settings > SMS and disable the ‘Auto Retrieve MMS’ option.

If you use a different messaging app, double-check that no similar option exists or has been selected. Samsung’s default Messages app, for example, also retrieves MMS messages automatically by default.

Regardless of your chosen messaging app, don’t open any picture messages from unknown sources. Delete them straight away, without opening them.

If you have an ancient Android phone – say, three years old or older – it might be time to consider upgrading. It’s highly unlikely that your phone will receive a Stagefright fix.

How to protect your Android device from Stagefright exploitation

Useful links:

  1. Update your device: Keep your device updated to the latest version at all times. If an update is not available for your device, manually install an OS like CyanogenMod that supports older devices for a longer period of time.
  2. Disable auto-fetching of MMS: You will need to disable this for both Hangouts and regular messaging apps. Here’s how:
    • Open Hangouts
    • Tap Options in the top left corner

AUTHOR: Sogeti UK Marketing team



What if Donald Trump were to buy the world’s first real ‘strong’ AI? What would he do with a thinking computer that could out-think everybody else? This was a question that crossed my mind when thinking about the ethics of AI. Or, along the same lines, what if Bill Gates (you know, from the Gates Foundation) were to own one? Or the government of China? Or the Vatican? Different owners would surely lead to different scenarios for how this AI would be used, and different outcomes for mankind.

There is already a lot of discussion about the dangers and opportunities of AI. Elon Musk and Stephen Hawking are among the famous people warning about what could go wrong. Autonomous weapons, killing anybody who fits a certain description? It might be possible for anyone to build them in a few years’ time.

Or perhaps – hopefully – a real strong AI would quickly realize that, to achieve the goals of any owner, progress for the whole of mankind would be best? Regardless of whether you pursue world peace, great fortune, or becoming the world’s leader, it probably helps if people are happy, healthy, and productive. Then again, another view says that any computer would quickly realize that the greatest threat to its own operation would be for someone to turn it off, and you don’t need a super-brain to reason this scenario through. (“I’m sorry Dave, I’m afraid I can’t do that.”)

On a more serious note, if you’re interested in the ethics of AI I can strongly recommend this video by Nick Bostrom, who also wrote a book on the same topic: Superintelligence.


AUTHOR: Erik van Ommeren
Erik van Ommeren is responsible for VINT, the international research institute of Sogeti, in the USA. He is an IT strategist and senior analyst with a broad background in IT, Enterprise Architecture and Executive Management. Part of his time is spent advising organizations on innovation, transformational projects and architectural processes.



Last winter, when I had a ski accident and broke my knee ligaments, my 6-year-old son said to me, “Don’t worry, they’ll fix it and you’ll become a robot”. What impressed me about that sentence was that, in his mind, the meaning of breaking things or of disability had already changed: in a few years, we will have to find a new term for the post-disability age.
Maxènce is a French boy, born without a hand, who has been equipped with the first 3D-printed hand in the world. The innovative thing is that the hand for “Super Max” is not a medical support that requires surgery, but a wearable device printed by the e-NABLE (“To Give The World A Helping Hand!”) association, created by volunteers passionate about 3D printing and able to provide innovative solutions at a low price. On his first day of school, he was “the hero”, the one the other kids admired, because of his new hand: the concept of disability suddenly took on a refreshed, new meaning.
The Chairless Chair is an exoskeleton that allows workers to sit without straining their muscles.


The change is already here: such solutions are created not only to help people with disabilities, but also to help “normal” people during their work in difficult situations. The Swiss startup Noonee has created an exoskeleton that acts as a chair and helps Audi’s employees during their working day, where they have to stand in difficult and uncomfortable positions. This chairless chair literally hugs the employee’s legs and sustains him/her whenever the standing position becomes difficult. The possible audience for such wearables is large, and their impact on productivity, compared to the fatigue of difficult positions, makes the ROI very interesting.


Hugh Herr is a bionic designer and also a mountain climber who lost both his legs in an accident. In his TED Talk, he said that “we the people need not accept our limitations, but can transcend disability through technological innovation”.


There is large potential in this sector, and recent studies show that the exoskeleton market will reach $2.1 billion by 2021, so we will have an exoskeleton market beside the therapeutic robotics market. People who could not walk or had a major disability today have the hope that moving about will become much easier, and I believe that this is just the beginning.

What makes me dream about all these innovations is that the sky is the only limit and we can go beyond ourselves. The last example I would like to cite is less linked to robots but still interesting: what Dr. Tarek Loubani did to help Palestinian patients. This Canadian doctor saw that stethoscopes were missing in Gaza because of the ongoing war between Hamas and Israel, and he “just” found the solution. His 3D-printed stethoscope costs $2.50 and, doctors say, is just as good as far costlier models.
Innovation is already here, but the funniest thing is that we are the innovation.


AUTHOR: Manuel Conti
Manuel Conti has been at Sogeti since 2010. With a technical background, he leads onshore and offshore teams, mainly using the Microsoft SharePoint platform to build intranet and internet websites.

Posted in: 3D printing, component testing, Digital, Innovation, Open Innovation, Research, Robotics, Social Aspects, Software Development, Technical Testing, Test Driven Development, Test Plans, User Experience, User Interface, user stories, Wearable technology


“We can’t share anything on public internet before completing security audit.”

“All changes made to an environment must be approved by managers.”

“Applications can be updated only during predefined times.”

Familiar statements, right? You might be wondering why I mentioned testing explicitly in the title. In fact, testing is implicitly included in all of these steps: a security audit is just one kind of testing, and testers are quite often the ones who get the blame if applications can’t go live because bugs were detected. So testing gets cast as the thing preventing “everything nice”.

Luckily, there are some companies that think differently. In those companies, it is enough that the code is in version control and the automated tests pass for a new version to be installed to production, without any human intervention. Improvements and fixes can also be made more quickly: it is enough to notice something in the application that needs to change (an improvement, an error) and tell the product team. The feature can then be fixed or implemented in production much faster than most people would expect.
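The automated gate described above can be sketched in a few lines. Everything here is a hypothetical stand-in for a real CI stage; the function name and the example commands are invented for illustration, not taken from any specific product:

```python
import subprocess

def pipeline_gate(checks):
    """Hypothetical human-free release gate: every check is an automated
    command, and a new version ships only if all of them pass."""
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False          # any red check blocks the release
    return True                   # all green: deploy with no manual approval

# Example stage list (stand-ins for a real CI configuration):
# pipeline_gate([["flake8", "src"], ["pytest", "-q"]])
```

In a real pipeline the same idea is usually expressed as CI configuration rather than application code, but the principle is identical: green checks are the only approval needed.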

The good part is that we clearly see quick fixes and new features. This can also be seen as a problem, so the first thing to do is to trust each other: trust that developers do not make mindless implementations and are able to fix things successfully, and trust that business people can make decisions without getting approvals from steering groups or committees.

So, in a nutshell:

  1. Trust people and let them do the things they’re good at.
  2. Trust that people do things the right way, and verify that the results of their work actually work.
  3. Help people by removing obstacles instead of creating them.
  4. Spend time doing things, not building (manual) safety nets.

All this is part of DevOps, so why not try it yourselves?


AUTHOR: Juho Saarinen
Juho Saarinen joined Capgemini Group, more specifically Capgemini Finland, at the end of 2007 as an analyst tester. He moved to Sogeti Finland when it was established in 2010, and has advanced from Analyst to Manager responsible for testing tools, test automation and the agile portfolio.

Posted in: Automation Testing, Developers, DevOps, Security, Testing and innovation      


You just inherited someone else’s code and the task feels impossible. You are overwhelmed by the lack of documentation, you find out that there are no tests in place, and you wonder how on earth you will make the requested changes without breaking things. Yes, you are scared!

I’ll try to help you with these 4 guidelines:

You will come out stronger from this experience.

As humans, we find ourselves on a never-ending learning journey. Take a moment for yourself, calm down, and be sure that you will get something in return once you conquer the inherited code or application.

Perhaps you don’t know anything about the business the application is intended for. Grasp anything you can and expand your knowledge base. You never know when the key things you learn will come in handy in another situation, another project, or even in landing your dream job.

Maybe it’s your lucky day and the code you just inherited was written by a more experienced developer, so you’ll learn new ways of doing things. Yes, I know, sometimes it can be difficult, frustrating and confusing, but believe me: once you finish the job you’ll have more tools in your toolbox, so when the next project knocks on your door you’ll have different ways to approach it.

In the worst-case scenario, you’ll find yourself trapped in the middle of some serious spaghetti code. In that case you’ll learn that you have a responsibility to those who follow, because you don’t want anybody complaining about your code, or do you?

Understand The Big Picture.

You’ve just become responsible for a code base you know nothing about. So where do you start? Remember that Agile, Scrum, PMP, ITIL, etc. all begin with some sort of planning or assessment, as described in the Deming PDCA cycle.

You need a plan; you need to know where you and the current code stand. Don’t jump into those first keystrokes before stepping back for a moment to get a glimpse of the project and an overview of what the software is supposed to do. Ask for any available documentation, functional requirements, user stories, or anything else stakeholders may have that will help you plan where and how to start.

Don’t act like a robot: ask what the expectations are and document your findings. Once you have a good understanding of the big picture, you are one step closer to working your way through the code.

Don’t start over.

I know that feeling. You look at the screen, you curse in any language you know and the first thing that comes to your mind is: I’ll start from scratch.

At this stage you probably can’t know all the business rules and knowledge embedded in the code base, and therefore you cannot be sure your “new” software will work or perform better than the original.

Unless you are making a major architectural or technological change, work with the legacy code as long as you can. Fix whatever you find that is not working as expected, and only start thinking about a rewrite once you have tests in place to ensure that you won’t break existing functionality.

Debug it and Test it!

Master the art of debugging. Step through the code you don’t understand: use breakpoints, watch variables, and learn what the code is intended to do. Once you have a picture of what the code is doing and what it is supposed to do, you are clear to start making your changes.

“… Even if your code was written yesterday, if there are no unit tests to characterize or define the behavior of the code, then the code is legacy, and will be difficult to change without introducing bugs…”

Do create tests for your code. Sometimes you’ll find this difficult because of constraints (e.g., time or budget), but make your case. Remember: you don’t want anybody reading this article because they just inherited your code!
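One way to get those tests in place, in the spirit of the quote above, is a characterization test: pin down what the inherited code does today, whatever that is, so later changes can’t silently alter the behavior. The `legacy_price` function below is invented purely for illustration:

```python
import unittest

# Invented stand-in for a legacy function inherited without docs or tests.
def legacy_price(quantity, vip):
    total = quantity * 9.99
    if vip and quantity > 10:
        total *= 0.9              # undocumented VIP bulk discount
    return round(total, 2)

class CharacterizationTest(unittest.TestCase):
    """Pin down what the code does today, before touching it."""

    def test_regular_customer(self):
        self.assertEqual(legacy_price(5, vip=False), 49.95)

    def test_vip_discount_applies_only_above_ten_items(self):
        self.assertEqual(legacy_price(10, vip=True), 99.90)
        self.assertEqual(legacy_price(11, vip=True), 98.90)
```

These tests don’t claim the behavior is correct, only that it is what it is; with them in place you can refactor or fix with a safety net underneath you.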

Hope it helps!


AUTHOR: Carlos Mendible
Carlos Mendible has been a .NET Architect at Sogeti Spain since 2012. He is Head Solutions Architect at A3 Media, one of our major clients, and is also responsible for assisting the sales process as a Pre-Sales Engineer and for conducting workshops and training sessions.

Posted in: Agile, Application Lifecycle Management, Business Intelligence, functional testing, Innovation, Quality Assurance, Robotics, Scrum, Software Development, Software testing, Test environment      


Like it or not, we live in a world where expectations are high: development teams are asked to get more done, in less time, in the face of moving targets, last-minute design changes, and complex, difficult-to-understand systems.

And those are the easy engagements.

In managing software development projects for well over two decades (yes, I’m that old; you can check my LinkedIn profile), I have seen one specific problem for which many solutions have been proposed (managerial, organizational, governance) and which still seems to elude most approaches and slow down most development teams.

The canonical example is a project where the team is tasked with putting up a new website, one that includes dynamic content display based on data accessed and delivered by back-end services. If all of these components (or even just some of them) must be developed from scratch, the typical proposed project plan looks like this:

But given the reality of simultaneously developing a UI layer, a services layer and a data-access layer, the schedule soon becomes this:

Now, no development team, no matter how professional, experienced and well managed, can hope to avoid the fundamental issue that creates this “integration chaos” without understanding why it happens.

In a nutshell:

When a project has multiple layers, and one layer depends on one or more other layers being functional before stakeholders can see the result in action, a framework or process is needed to ensure each layer can continue its development, testing and stakeholder review uncoupled from the other layers. That uncoupling should be done in a manner that permits, at all times, checking the correctness of the developed layer against a wide range of behaviors from the replaced layer.

A pretty obvious statement, it would seem.  But the devil is in the details of the framework/process chosen to uncouple these efforts.

One framework, likely in use since modern software development began, is “stubbing” or “mocking”.

Stubbing and mocking are intended to be very context-specific, short-term solutions to missing or hard-to-access software components, usually early in a development cycle.

Stubs provide replacement implementations for objects, methods, or functions in order to remove external dependencies. Stubs are typically used during unit and component testing for two main purposes:

  1. To isolate the code under test from the integrated environment.
  2. To enable testing to proceed when it is not possible to access an external resource or a problematic method, function, or object.

If you’re trying to write a unit test and need to replace a simple call to a database, an external library (e.g., file I/O) or another system API, stubbing might be perfectly suited to your needs.
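As a minimal sketch of that idea, the unit test below stubs a hypothetical payment gateway using Python’s `unittest.mock`, so the code under test never touches the network. `checkout` and `gateway.charge` are invented names for illustration, not a real API:

```python
from unittest import mock

# Code under test: depends on an external payment API that a unit test
# cannot (and should not) reach.
def checkout(order_total, gateway):
    receipt = gateway.charge(order_total)       # the external dependency
    return {"paid": True, "ref": receipt["id"]}

def test_checkout_in_isolation():
    fake_gateway = mock.Mock()                  # stub replaces the real API
    fake_gateway.charge.return_value = {"id": "TX-123"}

    result = checkout(42.0, fake_gateway)

    assert result == {"paid": True, "ref": "TX-123"}
    fake_gateway.charge.assert_called_once_with(42.0)
```

The stub serves both purposes listed above: it isolates `checkout` from its environment, and it lets the test run even though no payment service exists yet.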

[…] stubs/mocks are typically used to “skip” unavailable system components… —

While there are situations in which stubbing/mocking can help with development tasks in the face of missing dependencies, such a solution is exposed to the same issue as any other in a fast-moving environment where changes can happen unexpectedly. Stubs are typically throw-away placeholders, designed with only enough fidelity to the missing dependency to serve as a limited, short-term solution.

Moreover, with the advent of large development teams, the widespread use of SOA (including the use of third-party services), and the rise of Agile methodologies, stubbing and mocking have become less valuable to the development process.

Service virtualization, on the other hand, is generally not context-specific and can be used throughout the project cycle, including production, by all developers.

In software engineering, service virtualization is a method to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. It is used to provide software development and QA/testing teams access to dependent system components that are needed to exercise an application under test (AUT), but are unavailable or difficult-to-access for development and testing purposes. — Wikipedia

Service virtualization may appear, at first, to be not much different from stubbing or mocking.  Both classes of solution allow development and testing of software components in the absence of dependent components.

Where they differ, however, is in:

  1. How they do their work.
  2. Their flexibility in responding to changes in design.
  3. Their ability to mimic the missing dependency with high fidelity.

Service virtualization, in its most common use, replaces missing dependencies at well-defined and well-documented interfaces that are:

  1. Defined, at least partly if not fully, early in the development cycle.  Web services APIs are a good example of this.
  2. Well documented, using appropriate approaches, so as to be clear and unambiguous to the developers on “both sides” of the interface.  WSDLs and XSDs are good examples of this.
  3. Able to encapsulate logic for the missing dependency in a way that can be easily modified in response to design changes. This is probably the most difficult part of creating and using service virtualization, but tools exist to make it somewhat easier.

The bottom line is, no matter how a service is virtualized, this should always be true:

…as far as an application is concerned, those responses issued by virtual services are as good as the real thing — What is Service Virtualization?

As a result of creating this high-fidelity replacement for a missing dependency, one indistinguishable from the real service:

  1. Development of a software component can proceed without waiting for a dependent component to progress far enough to “come alive”. For instance, a UI layer can be running and available for feedback before the web services are fully ready.
  2. The virtualized service can serve as a “sanity check” on architecture and design decisions — errors can be corrected before they propagate throughout multiple layers.
  3. The service implementation may serve as a basis for documentation of the system behavior and as a baseline for proper operation of the real service.

How does one accomplish implementation of a virtual service?  A full exposition would be rather long (and perhaps the subject of another posting), but the general answer is pretty simple.

There are two common ways of setting up a virtualized service.

  1. Start with the contract (e.g. a WSDL or other protocol-specific descriptor) and create predetermined responses. This can be done manually using normal Java or .NET code, or you can use a commercial product. An advantage of this method is that the team writing the component doesn’t have to wait for the real version of the service to be completed. The downside is that the real version needs to actually match the fake one, which becomes a questionable proposition as timelines become longer.
  2. Use traffic recordings. A tool sits between the component under test and its downstream dependencies, acting as a proxy and gathering information about how the components interact. Later, those recordings can be used to simulate the conversation between the component and the servers it depends on.
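As a minimal illustration of the first, contract-driven approach, the sketch below serves canned JSON responses over HTTP using only the Python standard library. The path and payload are invented; a real tool would derive them from the service contract or from recorded traffic:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by request path. These stand in for responses
# generated from a WSDL/OpenAPI contract or captured from real traffic.
CANNED = {"/customers/42": {"id": 42, "name": "Ada", "status": "active"}}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown path"}).encode())

    def log_message(self, *args):   # silence per-request console logging
        pass

def start_virtual_service(port=0):
    """Start the fake dependency on a background thread; port 0 picks a free one."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server                   # server.server_address[1] is the bound port
```

A client team can point its configuration at this endpoint and keep developing while the real service is still being built; as far as the client is concerned, the responses look like the real thing.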

Most commercial off-the-shelf (COTS) tools support both approaches to some extent:

  1. CA LISA Pathfinder.
  2. IBM Rational Test Virtualization Server.
  3. Parasoft Virtualize.
  4. HP Service Virtualization.
  5. Many open-source toolsets (e.g., WireMock and Betamax, though these are really mocking tools).

In addition, where warranted, a bespoke implementation can be built.

The choice of toolset often depends upon the client’s existing vendor relationships, knowledge and supporting infrastructure.

An excellent checklist of service virtualization tool attributes can be found at:
