Didier Bonnet from Capgemini Consulting was very clear: he has never seen a company successfully transform into a digital master without a clear top-down vision and mandate. A pure bottom-up transformation does not exist! That said, adoption and engagement ARE key ingredients, just like governance and – of course – technology.

Capgemini and MIT examined many companies and plotted them along two measures: how mature their digital capabilities are, and how mature their leadership capability is. Four quadrants describe the different types of companies, and when correlated with financial performance, digital masters really do outperform the others. The researchers are fairly confident that this is true causation and not just correlation, and in any case it pays to learn from the top performers.

So how do you become a digital leader? There are four leadership capabilities that make you a digital leader: creating a transformative vision, designing engagement, establishing governance, and fusing business and IT. And when to start? Now! Because if you look at what’s coming, ‘you ain’t seen nothing yet’. If you’re not already at work becoming a digital master, how will you survive the big changes that are still coming our way: 3D printing, robotics, artificial intelligence, sharing economies, wearables, augmented reality, …

To read the original post and add comments, please visit the SogetiLabs blog: “BOTTOM-UP TRANSFORMATION DOES NOT EXIST”

Related Posts:

  1. Leading Digital: top-down leadership is essential
  2. Digital convergence requires transformation and omnichannel solutions
  3. Cloud and Open source go hand in hand
  4. What technology wants


Erik van Ommeren AUTHOR: Erik van Ommeren
Erik van Ommeren is responsible for VINT, the international research institute of Sogeti, in the USA. He is an IT strategist and senior analyst with a broad background in IT, Enterprise Architecture and Executive Management. Part of his time is spent advising organizations on innovation, transformational projects and architectural processes.

Posted in: 3D printing, Capgemini Group, Digital, Digital strategy, Human Resources, IT strategy, MIT, Transformation, wearable technology


I have learned that there are three major things that most organisations need to do to be digital leaders, but let’s start with the three things involved in any software project, like building an app. First, you need a plan that defines what you want to do and how to do it. Second, you need to organise your data in a structured way. And third, you need to define the functionality of the solution.

What most organisations need in their adoption of digital solutions is to do something similar, but on a larger scale. The first step is to take control of the Digital Transformation journey, which basically means to have a plan for the whole organisation’s adoption of digital technologies. The second step involves getting the most important data structured across all units and systems, and usually involves a Master Data Management initiative. The third step is to make the needed functionality easily available to the whole organisation, and this is about creating omnichannel services that can provide all touch points, like webs and mobile apps, with what they need, even if the legacy systems can’t keep up with all those needs.
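
The second step, Master Data Management, can be pictured in miniature as merging the same customer from several systems into one “golden record”. A minimal sketch, assuming hypothetical source systems and a simple source-trust rule:

```python
# Minimal sketch of master data consolidation: merge duplicate customer
# records from several source systems into one "golden record".
# The system names, fields, and trust precedence are all hypothetical.

def merge_customers(records):
    """Group records by a matching key (lowercased e-mail) and keep,
    per field, the value from the most trusted source available."""
    trust = {"crm": 3, "billing": 2, "webshop": 1}  # higher = more trusted
    golden = {}
    for rec in records:
        key = rec["email"].strip().lower()
        current = golden.setdefault(key, {})
        for field, value in rec.items():
            if field == "source":
                continue
            best = current.get(field)
            if best is None or trust[rec["source"]] > trust[best[1]]:
                current[field] = (value, rec["source"])
    # strip the provenance bookkeeping before returning
    return {k: {f: v for f, (v, _) in rec.items()} for k, rec in golden.items()}

records = [
    {"source": "webshop", "email": "Ann@Example.com", "name": "Ann", "phone": "111"},
    {"source": "crm", "email": "ann@example.com", "name": "Ann Smith"},
]
print(merge_customers(records))
```

The point of the sketch is the precedence rule: each unit keeps feeding its own records, and the MDM layer decides which value wins per field.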

In the video, Christian Forsberg shows a diagram in which the different channels are reached by a fast-moving set of touch points at the top, while at the bottom sit the much slower-moving back-end systems. In the middle of the diagram are a number of service layers: the omnichannel services that take care of core processes, connect to the back-end systems, and provide each touch point with customised services. This allows the organisation to move quickly in the market without waiting for the back-end systems to catch up.
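
That middle layer can be sketched as a thin facade that shields fast-moving touch points from a slow back-end and gives each channel the payload shape it needs. A minimal sketch; the back-end, channel formats and cache policy are all hypothetical:

```python
# Sketch of an omnichannel service layer: one core service feeds several
# touch points (web, mobile app), shielding them from a slow back-end.
# Back-end, channel payloads, and the 60-second cache are hypothetical.
import time

class LegacyBackend:
    def fetch_product(self, product_id):
        # stands in for a slow call into an ERP or mainframe
        return {"id": product_id, "name": "Widget", "price_cents": 1999}

class OmnichannelProductService:
    def __init__(self, backend, ttl_seconds=60):
        self.backend = backend
        self.ttl = ttl_seconds
        self._cache = {}  # product_id -> (expires_at, data)

    def _get(self, product_id):
        expires, data = self._cache.get(product_id, (0, None))
        if time.monotonic() >= expires:
            data = self.backend.fetch_product(product_id)
            self._cache[product_id] = (time.monotonic() + self.ttl, data)
        return data

    # each channel gets the shape it needs, from the same core service
    def for_web(self, product_id):
        p = self._get(product_id)
        return {"title": p["name"], "price": f"${p['price_cents'] / 100:.2f}"}

    def for_mobile(self, product_id):
        p = self._get(product_id)
        return {"n": p["name"], "c": p["price_cents"]}  # compact payload

service = OmnichannelProductService(LegacyBackend())
print(service.for_web("sku-1"))     # {'title': 'Widget', 'price': '$19.99'}
print(service.for_mobile("sku-1"))  # {'n': 'Widget', 'c': 1999}
```

The cache is what lets the touch points move at their own pace: the legacy system is consulted at most once per product per minute, however often the channels ask.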

To read the original post and add comments, please visit the SogetiLabs blog:  BE A DIGITAL LEADER WITH OMNICHANNEL SERVICES

Related Posts:

  1. Omnichannel Services Should Deliver Functionality, Not Only Data
  2. Digital convergence requires transformation and omnichannel solutions
  3. Innovation and IT services company: buzz or reality?
  4. Digital marketeers : don’t underestimate mobile app projects

Christian Forsberg AUTHOR: Christian Forsberg
Chris Forsberg is Sogeti's Global Digital Channels Lead Architect, and his passion is apps and the Internet of Things. He has been involved in the implementation of more than 100 apps on iOS and Android, most of them with integration to back-end systems.

Posted in: Big data, Business Intelligence, communication, Developers, Omnichannel      


What does ECM governance really mean today? Is it a kind of dictatorship in which the IT department governs the world of your company? Does it change based on the technology you use?

At Sogeti Switzerland we have run several intranet programs over the last few years, and we have realized that an ECM program is more a journey than a project, more a vision than a roadmap.

Three specific axes are necessary for good ECM governance: with the right people, we create rules, processes and responsibilities aiming to guide and control the whole platform – and this is completely independent of the platform you will use to build the solution.

For a good recipe and a delicious ECM cake, don’t forget the following points:

  • The team: Build, run and improve the right team with the right people; when we talk about collaboration we should really mean it: business stakeholders, IT leaders, trainers, end users.
  • Information Architecture: We are moving from a knowledge-centered intranet to an employee-centered world; therefore, don’t just store the content right – store the right content.
  • Processes: Everything should follow a process, because in six months you will not remember why you made that specific choice. Processes help avoid anarchy and keep people focused on what really matters: their daily work.
  • Permissions: A permissions policy is not a sword of Damocles, but the only way to avoid chaos; focus also on change management and training, rewarding users and authorizing them to do things only once they have followed the dedicated training.
  • Search, but also… serendipity: Enterprise search is mandatory today, of course, but don’t forget the fun part, serendipity: give your users not only a way to find things – which assumes they know what they are looking for – but also a way to discover things.

The ingredients above will help you to structure a functional ECM program with your clients. Keep in mind that technology matters, but people are even more essential to your business; what we deliver to our clients is much more than a simple intranet or software, it is something that will help them work better and improve productivity.

To read the original post and add comments, please visit the SogetiLabs blog: ECM GOVERNANCE: TECHNOLOGY MATTERS, PEOPLE MORE

Related Posts:

  1. The trend in IT: focus shifts from process to people
  2. Testability matters
  3. Governance Advice 101: Start measuring return
  4. What is really wrong with people?



Manuel Conti AUTHOR: Manuel Conti
Manuel Conti has been at Sogeti since 2010. With a technical background he leads onshore and offshore teams mainly in the use of the Microsoft SharePoint platform, building intranet and internet web sites.

Posted in: Business Intelligence, Human Resources, Opinion      


Why SOA?

If you’re looking for a way of designing, developing, deploying and managing your enterprise systems in a way that closely aligns your business goals and existing technical solutions, then Service-Oriented Architecture (SOA) is more than likely the logical solution.

Your SOA strategy should be integral to your existing Enterprise architecture designed by the Board of Directors and Management to achieve the organisational and operational goals of your business.

SOA is not, however, without its challenges: it is usually implemented with fast-evolving Web services, using tools that have yet to reach the requisite level of maturity, and really comprehensive testing is crucial to its success.

In order to maximise the business benefits that can be derived from the complex integration of business services, Service Oriented Architecture (SOA) requires a clear business vision and a way of governing its use across the enterprise.

From a testing perspective, the very flexibility SOA offers brings challenges of its own. The complex landscape of web services, middleware services and existing legacy services can render traditional testing approaches ineffective. A SOA testing approach formed specifically for the SOA architecture it serves – modular, standardised, and driving re-use wherever possible – is fundamental.

SOA Testing

SOA testing is a combination of service testing, process verification, Test Data Management (TDM) and accelerated SOAP automation. It also includes enabling practices such as continuous integration testing and service virtualisation. Testing teams need to test the systems at both the service-provider and client-end interfaces to ensure the systems are error free. Tests also need to be grouped correctly into a regression suite with a workflow-based data provisioning system.
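
A single service test from such a regression suite might, for instance, parse a SOAP response and assert on the fields a business process depends on. A minimal sketch using only Python’s standard library against a canned response; the envelope, namespace and status values are hypothetical:

```python
# Sketch of one SOA service check: validate a SOAP response against
# expectations before it joins a regression suite. The envelope and
# namespace below are hypothetical stand-ins for a real service reply.
import xml.etree.ElementTree as ET

SOAP_RESPONSE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:m="http://example.com/orders">
  <soap:Body>
    <m:GetOrderResponse>
      <m:OrderId>42</m:OrderId>
      <m:Status>SHIPPED</m:Status>
    </m:GetOrderResponse>
  </soap:Body>
</soap:Envelope>"""

NS = {"soap": "http://schemas.xmlsoap.org/soap/envelope/",
      "m": "http://example.com/orders"}

def check_order_response(xml_text, expected_id):
    """Parse the envelope and assert on business-relevant fields."""
    root = ET.fromstring(xml_text)
    order_id = root.findtext(".//m:OrderId", namespaces=NS)
    status = root.findtext(".//m:Status", namespaces=NS)
    assert order_id == expected_id, f"unexpected order id: {order_id}"
    assert status in {"NEW", "SHIPPED", "CANCELLED"}, f"bad status: {status}"
    return {"order_id": order_id, "status": status}

print(check_order_response(SOAP_RESPONSE, "42"))
```

In a real suite the canned XML would be replaced by a live (or virtualised) service call, but the assertion style stays the same.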

Is In-house Testing a Viable Option?

If this all sounds highly technical, it’s because it is, and herein lies one of the biggest challenges in SOA testing: to get it right in house, your testing team would need to learn new technologies, processes and tools, and understand the complex supporting work streams required by SOA’s unique architectural ecology. Your existing classical tools are unlikely, for example, to be able to test non-UI components, interpret messages that flow across an ESB, or handle SOA protocols such as SOAP and WS-Security. The new SOA tools will be unfamiliar, and the process is highly complex and requires expert handling. If software testing is not a core competency for your business, then you are unlikely to have the time, technical resources and budget to carry out the required level of testing within your organisation, so it makes sense to look into outsourcing it in order to ensure it is carried out to the highest possible standard. This, of course, raises the question: “Who should I outsource to?”

The Pitfalls of Using a Single Provider for Development and Testing

At first glance it appears to make sense to let the vendor who is carrying out the SOA development also implement the testing: after all, you’ve already explained your business structure to them, they obviously know the product well, you’ll save time drawing up a single contract, and it’s easier for management to deal with one provider.

However, seeking to save time and effort with a single contractor creates a potential conflict of interest, bringing unnecessary risk to the development process that could compromise the integrity of your SOA system. A vendor who is also developing your system either is already happy that the product is of sufficiently high quality, or knows that it is perhaps not ideally suited to your purposes but is seeking to meet a sales quota. In either situation there is a possibility that they will not carry out the depth and breadth of testing required to ensure your SOA system runs at the optimum level needed to meet all of your goals.

In addition, in a business where QA testing is not the core competency, testing is often passed to more junior members of staff who are working their way up to being developers, and these contractors simply do not have anything approaching the knowledge and expertise of a dedicated testing company. Another consideration is that testers who work for developers are under pressure from the development team to execute the testing as fast as possible and not delay delivery and payment, which can lead to incomplete tests and mistakes.

The Positives of Hiring Separate Testing Experts

In addition to these negative motivators, there are also some weighty positive reasons to instruct a separate business to carry out your SOA testing. Outsourcing offers higher-quality work coupled with a faster time to market, and the cost-to-value ratio is lower than carrying out the testing in house or with a single vendor, meaning that your overall ROI should be higher.

When you hire a separate software testing company you know that you are acquiring the services of a totally impartial, trained expert who has chosen testing as a career path and who is under no pressure from the development team. By dividing the development and testing you can also outsource to smaller businesses, reducing your cost and also avoiding the employee churn that often occurs in larger companies; in short, you get a more personal and bespoke service for less money.

As Forrester states in their webinar, ‘Selecting a Supplier for Outsourced Testing Services’, “Choosing an outsourcing supplier for testing is complicated by a number of factors, including the willingness to adapt existing, internal processes and activities to an outsourced model and the possible need to manage outsourced development and testing relationships separately.”

Selection Criteria for the Right Testing Company

At Sogeti we think that some of the most important selection criteria you should be looking for in a partner include:

  • SOA testing services that recognise testing from a business needs perspective
  • A SOA test team that can work in synchronicity with your development teams to ‘parallel run’ both the service and the interface testing
  • The ability to ‘virtualize’ the whole service early, so there are no nasty surprises – ironing out defects earlier and at lower cost
  • The right onshore resources to stand by your side, combined with real-time performance dashboards to provide clear progress updates
  • A proven track record that demonstrates their depth and breadth of experience in SOA testing, with domain experience in every major vertical – from banking to telecoms to fashion
  • SOA and middleware factories that can flex up or down at need with a commercial model that can also be based on outcomes, not resources.

In our clients’ experience, using a specialist independent testing supplier can help you realise cost savings well in excess of 30%.

Sogeti’s Expertise

In a benchmarking study of testing services from 13 major vendors, leading independent analyst Ovum rated the combined testing practice of Capgemini and Sogeti number 1 worldwide. Ovum commented in particular on our “ample capacity”, test process expertise, customer intimacy and responsiveness. If you would like further information about SOA testing or any of our other testing services, please visit our website or email us your enquiry.

Darren Coupland AUTHOR: Darren Coupland
Darren is the sector head of telecommunications at Sogeti.

Posted in: Business Intelligence, Enterprise Architecture, integration tests, test data management, Uncategorized, Virtualisation      


Working on a large transformation project for a client in the banking sector, I recently had the opportunity to think about how to build an innovation stream within the project. The project was fully oriented toward IT infrastructure, and the main challenge for this customer was to push IT innovation in order to improve the IS infrastructure.

Having prepared a dedicated session with Pierre Hessler on this topic, I would like to emphasize two very relevant points that Pierre stressed during the session.

Point 1: Digital changes the position of IT. The drawing below tries to explain this evolution:


Step 1: Apply before ERP Era


Step 2: Co-Create (ERP Era)


Step 3: Inspire (Digital Era)

As we can see, the position of IT is very challenging in this new world, and a kind of schizophrenic role is emerging: IT must be upstream, to provide the organisation with all the benefits of technology, and downstream, to be able to implement those technologies in a very short time given time-to-market constraints. Pierre Hessler called this new challenge “Double or Quit”, because of internal, parallel competitors working on Digital (Chief Digital Officer, Chief Data Officer, etc.)

Point 2: Maslow Pyramid Analogy


This second point can be linked to the first one, showing the different stages an IT department has to pass through in order to become a business partner. The analogy was built by the CIO of a large automotive company and has been reused and adapted for our client. The table below indicates how the Maslow pyramid could be crafted for an IT department:


This model could be applied and used to measure an organisation’s improvement. Does that make sense? Feel free to engage!

To read the original post and add comments, please visit the SogetiLabs blog: IT & DIGITAL: DOUBLE OR QUIT OPTION

Related Posts:

  1. Regulation: Innovation’s Double-Edged Sword
  2. Be a Digital Leader with Omnichannel Services
  3. Innovation is not an option
  4. Digital convergence requires transformation and omnichannel solutions


Jacques Mezhrahid AUTHOR: Jacques Mezhrahid
For 25 years, Jacques has been working in the software services industry, involved in various innovative projects across different market sectors. In charge of the Digital Convergence Offering for Sogeti France since 2012, combining mobility, collaboration, architecture and Big Data, he and his team are developing solutions, skills and expertise within the company.

Posted in: Developers, Human Interaction Testing, Human Resources, Innovation      


Machines Fit for Mensa

Allegedly a buzzword accidentally coined by Kevin Ashton, co-founder of the MIT Auto-ID Center, at a Procter & Gamble presentation in 1999, the ‘Internet of Things’ (IoT) is creating a brave new world in which physical objects are integrated into our networks and lives, usually via the cloud, to create a smarter living and working environment.

IDC expects IoT technology and related services revenue to grow to $7.3 trillion by 2017 and estimates there will be as many as 212 billion connected devices by 2020 (though Cisco gives a rather more conservative estimate of 50 billion devices). So what does this mean in practical terms?

  • For Dutch start-up Sparked, it meant attaching wireless sensors to cows so that when one of the herd is sick or pregnant, the farmer is notified of her condition and whereabouts, allowing him to act accordingly – and resulting in 200MB of data per cow per annum.
  • In the Health sector, Corventis has connected their mobile cardiac monitor to the IoT so you can even have a smart heart.
  • Hive from British Gas connects up a customer’s thermostat, boiler and router, allowing them to control temperature and switch heating on and off via an app or website.
  • The largest brewery in Switzerland is the proud owner of intelligent beer kegs that automatically order a refill when the amount left inside reaches a certain level, so they never run empty and customers are safe in the knowledge that their favourite beer will always be on tap.

The IoT has the potential to impact every area of our business and personal lives in a big way, and transform every industry – from health to transport, entertainment to food & drink – whilst also affecting our internal business processes such as supply chain management, distribution, telepresence and document management.

When Good Machines Go Bad

So what could a typical morning look like for someone using ‘smart’ devices? Let’s imagine that your 8am breakfast meeting is postponed for 1 hour, but there’s also a huge traffic jam on the motorway that will increase your journey time by half an hour, meaning that you could have a 30 minute lie-in if only you were aware of all this information. Worry not! Because your devices are all connected by the IoT, these messages are communicated to your alarm clock, your coffee maker, your hot water and your bath, automatically giving you that lie-in, saving you money on your gas bill and making sure both your bath and your morning coffee are still deliciously hot when you eventually spring out of bed. Sounds perfect, right?

But wait! What would your morning look like if something went awry with one or more of these communications? Your alarm goes off 2 hours late, you wake up to no hot water and a stone-cold cup of coffee, and you’ve lost an important business deal because you missed your meeting. Also, let’s face it: if the smart traffic lights are broken when you are eventually en route to work in your self-driving car built by DARPA or Google, you could be dealing with something a lot more serious than missing your caffeine fix and having to wear flippers in the bathroom! OK, so that’s probably a bit over-dramatic, but the point is that despite some extremely helpful use cases, a lot more can go wrong.

The Internet of Threats?

The benefits of the IoT are clear – Communication, Control through automation and Cost savings (the three Cs) – and the same IDC report we touched on earlier in this article suggests that the greatest IoT opportunities will initially be in the consumer, discrete manufacturing, and government sectors. But just what are the possible threats?

  • The IoT is based on an Open Source foundation and, as we saw in our earlier blog post Mutiny & the Bug Bounty, the security of OpenSSL was brought to its knees by the recent Heartbleed bug.
  • There have already been several examples of smart products being hacked. Black-hat hackers have, for example, turned Google’s Nest thermostat into a network traffic sniffer, and the Belkin WeMo home automation firmware was found to have five separate security vulnerabilities once it was already installed in consumers’ homes.
  • Another really interesting example occurred at retail store Target when attackers stole customers’ credit card details by accessing the retailer’s point of sale devices via vulnerabilities in the store’s smart air conditioning system!

There are a number of other threat factors peculiar to devices connected by the IoT that need to be considered at the design and build stage. One is that they are not typically end-user serviceable; another is that they may behave differently where bandwidth is restricted or latency is high; a third is that the relationship between machine and machine, or machine and end user, may be transient – for example, a mobile phone temporarily linked to a hire car. There is also a lack of contact between the engineers who create the physical smart “things” and those who install them. All of this, combined with the ability to spy on and steal from devices connected by the IoT, raises serious issues of public and personal privacy, security, legality and ethics.

The Internet of Tests

Each component part of the IoT (network, application, mobile and internet) has its own security and privacy issues so it is unsurprising that, when all of these things are combined, the potential problems are vastly multiplied. So what is the best way to pre-empt these potential issues and diminish them as much as possible?

There are various ideas that could help, such as separating out the networks so that the IoT devices can’t interact with things that are on a protected network and building devices that are designed to die after a given time span. Ultimately though, the most effective way to ensure that your IoT devices do not fall foul of all of these security, privacy, legal and ethical threats, is to create an IoT-specific testing road map and vigorously test at every stage of development, including replicating the installation environment. It is clear that this new technology requires a new testing strategy.

So what are the challenges of Testing the IoT? Well, as smart devices become more prolific and widely used, the end user environment could be hot, freezing, wet, humid, at altitude, in motion or very noisy – all of which impact the effectiveness of the device. A smart phone alone can now have 20 million lines of code, so these ultra-connected devices are of course extremely complex with more room for error. Each device also uses a wider range of resources such as memory, processing power, bandwidth and battery life; all of which need to be tested. Furthermore, traditional hard and soft testing tools will need to be upgraded to cope with these greater levels of complexity. High levels of competition also mean that there are pressures regarding time to market and costs.

A successful IoT test strategy will need to include, at least:

  • A skilled test team with experience in web and embedded environments, hardware, systems and network knowledge, and performance testing expertise
  • Practical threat analysis
  • Attack Trees and Threat Models built early on, to inform decision making
  • A combined automated and manual testing strategy
  • Production hardware schematic review and verification
  • Testing manufacturer’s vulnerabilities
  • Base Platform, Network Traffic & Interface Security Analysis
  • Verification of functional security design and architecture requirements
  • Functionality assessment
  • Security focused code reviews
  • Backdoor identification
  • Testing and proper calibration of the device sensors (usually with a defined stimulus signal)
  • Business integration testing
  • Rapid agile testing and reporting
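
One item above, testing and calibrating device sensors with a defined stimulus signal, can be sketched as a unit test that feeds known reference inputs through the reading pipeline and checks the calibrated output stays within tolerance. The sensor model and calibration constants below are made up for illustration:

```python
# Sketch of sensor calibration testing: feed a known stimulus through a
# (hypothetical) temperature sensor's conversion pipeline and assert the
# calibrated reading stays within tolerance. Gain/offset are made up.
def raw_to_celsius(raw, gain=0.125, offset=-40.0):
    """Convert a raw ADC count to degrees Celsius (hypothetical sensor)."""
    return raw * gain + offset

def test_sensor_against_stimulus():
    # defined stimulus: ADC counts produced at known reference temperatures
    stimulus = [(320, 0.0), (520, 25.0), (720, 50.0)]
    tolerance = 0.5  # degrees Celsius
    for raw, expected_c in stimulus:
        reading = raw_to_celsius(raw)
        assert abs(reading - expected_c) <= tolerance, (
            f"raw={raw}: got {reading:.2f} C, expected {expected_c} C")

test_sensor_against_stimulus()
print("sensor calibration checks passed")
```

On real hardware, the stimulus would come from a calibrated reference source rather than a table, but the pass/fail logic is the same.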

Internet of Rights?

Now, the Internet of Things is not a new phenomenon; the majority of us have used it every day on our laptops and mobile devices for a few years. It is, however, set to grow exponentially in the next 24-36 months, which poses the question: how will we maintain our privacy and security?

We know the potential social and business benefits are huge – who doesn’t want to improve airport flow or use cheaper wireless technology in the cloud at work? And yes, as with all technology and indeed most things in life, there are risks and threats; but with the right approach to testing much of this can be eliminated so that the benefits far outweigh the potential issues.

So, IoT: good or bad, or simply here to stay regardless? Well, before making up your mind you may want to consider a few words of warning by Gartner Fellow, Steve Prentice, in a recent interview with ComputerWeekly, “Smart machines are close to outsmarting humans – whether driving a car or determining a medical diagnosis – leaving the human overseer with the responsibility but reduced capability. But, if we take the major step of changing our legal systems to give machines the responsibility for their own actions, can they also expect rights?”


AUTHOR: Sogeti UK Marketing team

Posted in: Cloud, communication, Innovation, Internet of Things, IT strategy, mobile testing, Technology Outlook      


Testers are detectives. Often I spend a lot of time investigating how a certain software programme that needs adaptation can be tested. Some issues are:

- The system is not well documented, and people don’t seem to be sure how the system is built up or where exactly the changes are needed.

- The system is a giant black box whose output does not always tell us much about whether its internal behaviour is as required.

- The software interacts with other systems, but it is a mystery whether that interaction can happen the same way in the test environment, or what is theoretically needed for it to work.

In those cases, test execution often turns out to be hard, error-prone, and slow – even for small programme changes.

Whatever project approach you take, the testability of the software being created or amended is a decisive factor in the time and cost of testing. When the required effort is deemed too high and management decides to drop (part of) the testing, this brings risk: a potential loss of quality and a lack of confidence in the product.

However, testing is not a one-time activity, as software is a living thing that constantly needs to catch up with its users’ demands. Therefore, testing needs to be done throughout the life span of the software, whenever changes are made to it – and I promise you there will be changes.

Investing in the testability of a system gives returns over its entire life span, and TMap® recognises this value: it lists testability as a quality characteristic of a software system, alongside all-time classics like functionality, security, and performance.

There are a number of measures that can be taken to improve the testability of a system:

Improve transparency about what the system does and what it is supposed to do:

  • Keep updated system documentation
  • Write readable, well-structured code
  • Have a solid test basis (clear software requirements)

Use technologies and write code by which parts of the system can be tested in isolation:

  • Maintain a proper separation of concerns
  • Allow interim results of the system to be made visible, assessed, and even manipulated
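
Letting parts of the system be tested in isolation usually comes down to making collaborators swappable, so the piece under test can be driven and observed without its real dependencies. A minimal sketch, with hypothetical names:

```python
# Sketch of designing for isolation: the pricing rule takes its
# collaborator (a customer lookup) as a parameter, so a test can replace
# the real database with a stub. All names here are hypothetical.
def order_total(items, customer_id, get_customer):
    """Total an order, applying a 10% discount for 'gold' customers."""
    subtotal = sum(price * qty for price, qty in items)
    customer = get_customer(customer_id)  # injected dependency
    if customer["tier"] == "gold":
        subtotal *= 0.9
    return round(subtotal, 2)

# In production, get_customer would query a database; in a test we pass a
# stub, so the pricing rule is exercised completely in isolation.
def stub_gold(_id):
    return {"tier": "gold"}

def stub_basic(_id):
    return {"tier": "basic"}

assert order_total([(10.0, 2)], "c1", stub_gold) == 18.0
assert order_total([(10.0, 2)], "c1", stub_basic) == 20.0
print("pricing rule verified in isolation")
```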

Set up a full-fledged test environment:

  • Find the right balance between representativeness and flexibility (e.g. adjustable system date for purposes of time travel)
  • Opt for technologies in which tests can easily be automated
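
The ‘adjustable system date’ idea can be illustrated by injecting the date into the code rather than reading the real clock, so a test can travel in time at will. A small sketch; the 30-day overdue rule is hypothetical:

```python
# Sketch of "time travel" in a test environment: the code asks for
# today's date as a parameter instead of reading the system clock, so a
# test can set any date it likes. The 30-day overdue rule is made up.
from datetime import date, timedelta

def is_overdue(invoice_date, today):
    """An invoice is overdue 30 days after it was issued."""
    return today > invoice_date + timedelta(days=30)

# production code would pass date.today(); tests pass fixed dates
issued = date(2014, 1, 1)
assert not is_overdue(issued, date(2014, 1, 31))  # day 30: not yet overdue
assert is_overdue(issued, date(2014, 2, 1))       # day 31: overdue
print("time-travel checks passed")
```

The same principle scales up to a whole test environment: one adjustable reference date instead of dozens of hidden calls to the real clock.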

These measures may not be the first thing on any project manager’s mind, but they make the work of both developers and testers a lot easier and more effective, especially in the longer term – enabling better quality to be built in while generally reducing the software’s total cost of ownership.

To read the original post and add comments, please visit the SogetiLabs blog: TESTABILITY MATTERS

Related Posts:

  1. Is it a good idea solving test environment problems “later”?
  2. Playing the blame game
  3. Create a safety net, not a hammock!
  4. Use Scrum and halve the test department!



AUTHOR: Wannes Lauwers
Wannes Lauwers has been working with Sogeti since 2012 as a versatile software testing professional who manages to bridge the gap between business and IT in a variety of industries, thanks to his broad interests and educational background in business administration. With his process mindset, Wannes is committed to delivering long-term solutions well aligned with business needs, thus guiding companies’ convergence of business and IT into true business technology.

Posted in: A testers viewpoint, Developers, Managed Testing Services, Opinion, Requirements, Software testing, Test Environment Management, TMap, Usability Testing, Working in software testing      


During our childhood days we all dreamt about how life would be in the future, truly inspired by wonderful movies and animated cartoons such as The Jetsons, Terminator, and the like. How many of us imagined having robots as our best friends? Or becoming heroes of the Earth? Or defending the human race from humanoid attacks? But did we imagine we would actually witness some of those fictional characters in reality?

Meet Jibo, the first family robot.

With more than $2m raised through crowdfunding, this is certainly not far from being a reality; Jibo is a family robot designed to perform a few essential tasks specifically aimed at busy families.

According to its creator, Cynthia Breazeal, Jibo is an alternative to smartphones and tablets with assistants like Siri. It orients toward people, recognises voices, and provides personalised interaction. Some of its features include telling children stories or taking messages for specific family members.


Another type of robot is being made for medical and research purposes. Romeo is a humanoid created to assist the elderly and those who have lost autonomy. It can open doors, climb stairs, and even grab objects from a table in order to be of help.

Yet another category is the industrial robot. Tesla, a pioneer in innovative manufacturing techniques, uses robots of various sizes in its factory to put together a car in record time. With its combination of humans and robots, the Tesla factory is a model to be studied and understood.

Looking at the ever-increasing number of robots like Jibo and Romeo, now is the right time to look into creating standards and protocols to ensure that everything stays under control. With the mélange of Artificial Intelligence (AI), robotics and emotional intelligence (EQ), the new generation of robots has slowly moved into our houses and workplaces, performing duties that we used to do ourselves. Seminars and TED Talks show how the world can progress with these new robots and humanoids. Many of us would love to have a Wall-E or Rosie kind of robot at home. But would you be able to handle a robot that has gone out of control, or a humanoid gone bad?

The International Organisation for Standardisation has introduced standards for different categories of robots: industrial robots, social robots, etc. For example, ISO 13482:2014 focuses on safety requirements for personal care robots in non-medical applications: the hazards associated with such robots are identified and protective measures are specified.

Ethical aspects of using robots in our lives are also gaining significant importance. Many questions have already been raised by groups such as the Danish Council of Ethics, including:

  • Will robots change the way we view human interactions?
  • In the medical field, is the use of robots a step forward or backward?
  • As robots do not actually have emotions, will using them to help patients with dementia be a positive step towards recovery, or is it plain deception?

In a previous SogetiLabs blog post, Daniël Maslyn mentioned the importance of testing and of expanding our robotics domain knowledge to ensure that these robots do not go astray. After all, humans created robots, and humans are error-prone.

Though still in its initial and exciting phase, the field of robotics has already acquired a lot of fans on one side and strong critics on the other. We, as technology lovers, must not get caught up in the merits of robotics alone. In the rush to move forward, it is essential to treat this field with enough critical scrutiny that it remains sustainable and a viable solution to our problems.



To read the original post and add comments, please visit the SogetiLabs blog: I ROBOT… AM BACK!

Related Posts:

  1. The H+ shift of Google (Part 3/4: Robotics)
  2. Finally, yesterday’s future is here!
  3. 20 jobs of the future


Jumy Mathew AUTHOR: Jumy Mathew
Jumy Mathew has been part of the Sogeti Group since 2007. In his current role as a senior Java developer, he is responsible for the design and implementation of the EU institutions' HR application (SYSPER2). He currently works for the DIGIT unit of the EU Commission and is also involved in mentoring and coaching team members on his project.

Posted in: Automation Testing, Business Intelligence, Developers, Human Interaction Testing, Innovation, Open Innovation, Opinion, Technology Outlook, Transformation      


In the first part, some strange conclusions appeared to be drawn from data. Let’s try to explain what happened.


We analysed a sequence of data (people entering a mall, measured every 7 minutes throughout the day for a month) that we assumed to be periodic. So we “converted” our data from the time domain to the frequency domain to identify the rush periods (and we “converted” it back to check how close we were).

What we forgot in the process was that our sample rate (fs = once every 7 minutes, i.e. 2.38 mHz; as it is “slow” we have to use millihertz units) does not permit us to “find” anything with a frequency higher than fs/2 (1.19 mHz in our case, i.e. anything happening more often than once every 14 minutes). A key constraint in converting information from the time domain to the frequency domain is that you need at least two samples per period of a frequency to be able to measure it.
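The figures above are just unit conversions; a quick sanity check of the sample-rate arithmetic (values taken from this post):

```python
# Sampling once every 7 minutes, expressed in hertz (1 / period in seconds).
fs = 1 / (7 * 60)          # sampling frequency
nyquist = fs / 2           # highest observable frequency (Nyquist)

print(f"fs     = {fs * 1000:.2f} mHz")                 # 2.38 mHz
print(f"fs/2   = {nyquist * 1000:.2f} mHz")            # 1.19 mHz
print(f"period = {1 / nyquist / 60:.0f} minutes")      # 14 minutes
```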

“But we didn’t do anything wrong!” one could argue, as we found 45 minutes (0.37 mHz) and 17 minutes (0.99 mHz), both below fs/2 (1.19 mHz). On the other hand, we didn’t “clean” the data before applying our transformation tool, and there was evidently something hiding in it: the main subway line, bringing up to 10 people every 12 minutes (1.39 mHz). The 45 minutes was real (it is the train frequency), but the 17 minutes wasn’t at all. It appeared, like a ghost, mirrored around the Nyquist frequency (fs/2) due to “folding” (imagine folding the spectrum like a sheet of paper along a line at fs/2): 2.38 − 1.39 = 0.99 mHz.
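The ghost is easy to reproduce. Below is a minimal sketch in NumPy using synthetic data (not the original mall measurements): a pure 12-minute component, sampled every 7 minutes, shows up in the spectrum near 0.99 mHz, i.e. a roughly 17-minute period.

```python
import numpy as np

# One day of samples, one every 7 minutes (times in seconds).
t = np.arange(0, 24 * 3600, 7 * 60)

# Synthetic "subway" component: peaks every 12 minutes (1.39 mHz),
# which is above the Nyquist frequency fs/2 = 1.19 mHz.
f_subway = 1 / (12 * 60)
signal = 10 * np.sin(2 * np.pi * f_subway * t)

# Spectrum of the undersampled signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=7 * 60)

peak = freqs[np.argmax(spectrum)]
print(f"peak at {peak * 1000:.2f} mHz (~{1 / peak / 60:.0f} min period)")
# The 1.39 mHz component folds to fs - f = 0.99 mHz: the phantom "17 minutes".
```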

This folding is the first reason to always be careful when looking for frequencies in measurements of phenomena that can vary faster than half the sampling rate. One way to deal with it is to increase the sampling frequency; here is the same mall example with a one-minute sampling rate (still showing the estimate for comparison):

Fig 1: higher sampling rate

But that is not easy when you don’t know how “high” in frequency reality can go, so applying a low-pass filter before sampling is usually the way to go, remembering that low-pass filters are not “vertical” and induce artifacts of their own: the stronger the filter, the more the phase is altered, and some residue often remains when there is strong high-frequency content, which should be cancelled out by adding generated noise beforehand.
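A minimal sketch of the idea, using a moving average as a deliberately crude low-pass filter (a real design would use a proper filter, e.g. from scipy.signal): measure at a high rate, filter out everything faster than the target Nyquist period, then downsample.

```python
import numpy as np

# One day measured every minute: fast enough to capture the 12-minute
# subway component without folding.
t = np.arange(0, 24 * 3600, 60)
signal = 10 * np.sin(2 * np.pi * t / (12 * 60))  # the 12-minute component

# Crude anti-alias low-pass: a 14-minute moving average, matching the
# 14-minute Nyquist period of the 7-minute target rate.
kernel = np.ones(14) / 14
filtered = np.convolve(signal, kernel, mode="same")

# Downsampling to one sample every 7 minutes is now much safer: the
# 12-minute component is attenuated instead of folding into a ghost.
decimated = filtered[::7]
print(f"amplitude after filtering: {np.abs(filtered[100:-100]).max():.2f} (was 10)")
```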

In everyday experience, when you see a wheel rotating backwards in a movie, or when a CD from an old mastering sounds harsh, you now know where it comes from: aliasing for the wheel, and steep low-pass filters with little or no noise added beforehand (plus, sometimes, residue of folding) for the CD.

Regression to the mean

As for the strange performance improvements (and deteriorations) observed when comparing the best- and worst-performing groups of services across two successive tests (when, in fact, we did nothing between the two campaigns to improve or worsen their performance), I’ll let Derek Muller from Veritasium provide some explanation and examples:

The main point here is to always be careful when comparing a selected subset of measures (because the reason for the selection can often bias the comparison), to always check the population of measures (is it already a biased subset?) and the effect of the measurement on the data (am I measuring the tool or the phenomenon?). Another good practice is to keep an open mind and evaluate other hypotheses as well, because our main line of analysis often converges with the result we would like to observe.
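Regression to the mean is easy to reproduce with synthetic data. A sketch under a simple assumption: each test score is a fixed "true skill" plus independent noise, and nothing changes between the two tests.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 10_000
skill = rng.normal(100, 10, n)           # stable underlying performance
test1 = skill + rng.normal(0, 10, n)     # score = skill + independent noise
test2 = skill + rng.normal(0, 10, n)     # nothing changed between tests

worst = test1 < np.percentile(test1, 10)   # "worst performers" in test 1
best = test1 > np.percentile(test1, 90)    # "best performers" in test 1

# The selected groups move back toward the mean on retest, with no
# intervention at all: the selection partly captured noise, not skill.
print(f"worst group: {test1[worst].mean():.1f} -> {test2[worst].mean():.1f}")
print(f"best  group: {test1[best].mean():.1f} -> {test2[best].mean():.1f}")
```

The "worst" group appears to improve and the "best" group appears to deteriorate, purely because extreme selections are partly luck.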

Or you can use this in future career moves…


With data becoming easier to acquire, tools easier to use, and quantities large enough to overwhelm: use the tools you know well, keep an open mind about your hypotheses, and always ask a specialist when dealing with complex analyses and high-stakes results.

To read the original post and add comments, please visit the SogetiLabs blog: THIS PIECE OF DATA IS LYING! (2/2)

Related Posts:

  1. This piece of data is lying! (1/2)
  2. Three Data Scientists Share Six Insights on Big Data Analytics
  3. Secure Your Piece of the PIE: of All the Trillions that Pervasive Interaction Engine IoT Will Bring
  4. Big data and trends analysis: be aware of the human factor!



Claude Bamberger AUTHOR: Claude Bamberger
Claude Bamberger has been an information systems architect since his first job in 1994. Claude’s foray into innovation has been recognized twice by ANVAR (now OSEO), for a school project that grew into a lab and for a collaborative Talent Management solution, and also by the EEC for a custom TV channel for high-speed Internet providers that has been deployed in multiple countries from Asia to Europe.

Posted in: A testers viewpoint, Big data, Business Intelligence, communication, Developers, Opinion      