Security is critical to any successful software implementation. Security breaches have received more and more press recently, and major data breaches have become PR nightmares for the companies involved. If these high-profile breaches have taught us anything, it's that security cannot be an afterthought for your organisation. So what do you need to know before launching an IoT initiative? Below is a list of the top four security problems in IoT and their potential solutions.

Problem 1: IoT may increase your attack surface. Every device you add is another potential entry point for a hacker, who could compromise or seize control of it and attempt to use it to access your systems.

The Solution: Use a pessimistic security strategy. Every device and service account should be configured with the minimum permissions needed to perform its tasks; only allow access for what is necessary. This can be enforced in device-to-device firewalls, in your service account security settings, and in on-device firewalls.
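The default-deny idea behind a pessimistic strategy can be sketched as a tiny policy check. The device names, hosts, ports, and the allowlist itself are illustrative, not from the post:

```python
# Minimal sketch of a default-deny ("pessimistic") policy check.
# Device names, hosts, and ports below are illustrative examples.
ALLOW = {
    ("temp-sensor-01", "telemetry.internal", 8883),  # MQTT over TLS only
    ("temp-sensor-01", "ntp.internal", 123),         # time sync
}

def is_allowed(device: str, dest_host: str, dest_port: int) -> bool:
    """Deny everything that is not explicitly listed."""
    return (device, dest_host, dest_port) in ALLOW

print(is_allowed("temp-sensor-01", "telemetry.internal", 8883))   # True
print(is_allowed("temp-sensor-01", "hr-database.internal", 5432)) # False
```

A real deployment would express the same table as firewall rules and IAM policies, but the principle is identical: anything not explicitly permitted is denied.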

Problem 2: Devices cannot always be stored in secure facilities. In some cases, you may need an IoT device to monitor something outside of your secure buildings. How do you prevent physical tampering? How do you stop someone from walking up to your devices and plugging in a USB drive to install malicious software?

The Solution: Monitoring, logging, and operating-system-level security. Every action the device takes can be logged, and device uptime, downtime, and overall health should be monitored. Device operating systems should be configured for secure boot, which requires all software to be signed and validated at startup to ensure authenticity. Any software that cannot be validated is not allowed to run, which stops malicious programs.
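The validate-before-run idea can be sketched in a few lines. A real secure boot chain verifies cryptographic signatures in firmware; here a digest allowlist stands in for that, and the "software image" is just bytes:

```python
import hashlib

# Sketch of the validate-before-run principle behind secure boot.
# A real chain verifies vendor signatures; a SHA-256 digest allowlist
# stands in for that here. Image contents are illustrative.
TRUSTED_DIGESTS = {
    hashlib.sha256(b"approved firmware v1.2").hexdigest(),
}

def may_run(image: bytes) -> bool:
    """Only software whose digest matches a trusted entry may start."""
    return hashlib.sha256(image).hexdigest() in TRUSTED_DIGESTS

print(may_run(b"approved firmware v1.2"))  # True
print(may_run(b"tampered firmware"))       # False
```

Anything a USB-wielding attacker drops onto the device fails this check and never starts.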

Problem 3: IoT devices are autonomous – nobody is present to enter credentials. IoT devices might be asked to execute commands on demand. Since they are designed to be autonomous and not require user interaction, the devices need to decide whether a command should actually be executed or not.

The Solution: Treat devices like external users. Every device should be required to connect to your network just like any external user, with an authentication method that proves the device is what it claims to be. However, be careful not to share one account across all of your devices; the goal is to establish accountability and traceability. Pair this with the pessimistic security strategy discussed above.
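One way to realise per-device credentials is to give each device its own secret and have it sign every command, so the server can both authenticate and attribute the request. This is a sketch under assumed names (device ids and keys are invented); production systems would more likely use client certificates or a managed identity service:

```python
import hashlib
import hmac

# Sketch: each device has its own secret, so commands are attributable
# to exactly one device (no shared account). Ids and keys are illustrative.
DEVICE_KEYS = {"pump-7": b"k3y-for-pump-7"}

def sign(device_id: str, command: bytes) -> str:
    """Device-side: sign a command with this device's own key."""
    return hmac.new(DEVICE_KEYS[device_id], command, hashlib.sha256).hexdigest()

def verify(device_id: str, command: bytes, signature: str) -> bool:
    """Server-side: accept only commands signed with the claimed device's key."""
    if device_id not in DEVICE_KEYS:
        return False
    return hmac.compare_digest(sign(device_id, command), signature)

sig = sign("pump-7", b"open-valve")
print(verify("pump-7", b"open-valve", sig))      # True
print(verify("pump-7", b"open-valve", "bogus"))  # False
```

Because every device holds a distinct key, a forged or replog-free shared credential cannot masquerade as the fleet, and logs can trace each command to one device.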

Problem 4: Not many standards exist in IoT. Because IoT is an emerging technology, many devices and their packaged software use their own ports and protocols. How do you deal with all of these different approaches in your IoT implementation in a secure manner?

The Solution: Use accepted standards where possible, and add device-level security. Sending data from a device to your infrastructure can be done with standards that have emerged from service orientation (REST, for example). For non-standard protocols or ports, the software itself is the only thing that really knows how to detect suspicious packets, so IoT devices need their own security layer to identify and reject suspicious or abnormal requests. Server-to-server firewalls will still be required, but an on-device firewall is also a must.
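A device-level sanity layer for a proprietary protocol might look like the sketch below: only the device software knows what a "normal" request looks like, so it applies its own checks before processing anything. The opcodes and limits are invented for illustration:

```python
# Sketch of an on-device sanity layer for a non-standard protocol.
# Only the device software knows what "normal" looks like, so it checks
# every request itself. Opcodes and limits below are illustrative.
MAX_PAYLOAD = 512                      # bytes a well-formed request may carry
KNOWN_OPCODES = {"READ", "WRITE", "PING"}

def accept(opcode: str, payload: bytes) -> bool:
    """Reject anything outside the protocol the device actually speaks."""
    if opcode not in KNOWN_OPCODES:
        return False                   # unknown command: reject
    if len(payload) > MAX_PAYLOAD:
        return False                   # oversized packet: likely abuse
    return True

print(accept("READ", b"sensor-3"))     # True
print(accept("EXEC", b"rm -rf /"))     # False
```

The firewall handles ports and peers; this layer handles the protocol semantics that no generic firewall can know.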

Security cannot be an afterthought. IoT is an emerging technology, which means security will not be guaranteed with out-of-the-box enterprise implementations. Remember to consider security early on in the process of launching an IoT implementation, and always consider your “what if?” scenarios. Emerging technology does not have to be frightening, but it does need to be strategic.

To read the original post and add comments, please visit the SogetiLabs blog: Security in Emerging IoT: The Top 4 Problems and their Solutions

Related Posts:

  1. Cloud Security is a Shared Responsibility
  2. Internet of things: Interconnection means new security and testing capabilities
  3. The Winds of Change in Cloud Security
  4. The Smart City Needs To Deal With The Same Problems As The Internet

AUTHOR: Michael Pumper
Mike Pumper started his career with Sogeti in 2011. Since then, his career has rapidly accelerated both internally within Sogeti and at his clients. Through working with clients, Mike has gained experience in a variety of topics at many levels of expertise. He started his career at Baxter Healthcare as a Java/Spring developer, working closely with the business to develop a portal application. Mike started at National Merit Scholarship Corporation late in 2011 as a technical lead and solution architect, overseeing a small team that created a critical internal Java/Spring application that is still in use by the business today. After the completion of the Java/Spring application, National Merit embarked on an ambitious modernization project utilizing Microsoft technologies, including C# and WCF. The project follows a service-oriented architecture, which Mike is in charge of designing and managing. Mike also continues to act as the technical lead, mentoring and leading a team of onshore and offshore resources as the project progresses. Internally, Mike has been committed to contributing to Sogeti's excellence in any way he can. Mike took charge of reforming the technical interviewing process in Chicago in 2013, which gave interviewers more tools to ensure that recruits meet the Sogeti standard of quality. Mike has also given talks and lectures for Sogeti consultants and his clients on a variety of topics, including JavaScript, MVC, MVP, and MVVM, dependency injection, design patterns, N-Tier architecture, service-oriented architecture, and continuous integration.

Posted in: Behaviour Driven Development, Business Intelligence, communication, Developers, Digital strategy, integration tests, Internet of Things, IT strategy, Security, Technology Outlook, Transformation      
Comments: 0


Why do e-commerce players need to prepare ahead?

One of the fundamental aspects of e-commerce, and retail in general, is that every project and initiative revolves around the holiday shopping season. Holiday revenue from e-commerce sales has been increasing steadily over the last few years and is expected to top $80B in the US (Statista, 2015). Further, according to a recent survey by the US Census Bureau, the percentage of e-commerce sales as compared to overall sales has been steadily increasing in the retail sector, and given the trends from the last 10 years, this ratio is expected to favor e-commerce even more (DeNale, Liu, & Weidenhamer, 2015).

In this series of blog posts, I will outline various aspects of an e-commerce site that need to be analysed and tuned to get ready for the upcoming holiday rush.

What’s a good starting point?

In my experience, the best starting point is historical data and trends. Data from the last few years is key to determining target metrics for the upcoming season (with reasonable extrapolation). For example, if your peak day in 2014 was $20M and the business forecasts 10% growth for the upcoming season, it is very likely that you will have to handle a $22M day this season. Given the somewhat unpredictable behavior of online shoppers and the reactive nature of e-commerce systems in general, it is a safe bet to ensure your systems can handle at least a $25M day (if not more).
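The sizing arithmetic above is worth making explicit, with the safety cushion as a named parameter. The 15% headroom figure is an assumption chosen to land near the $25M target, not something from the post:

```python
# Capacity target from last season's peak, forecast growth, and headroom.
# The 15% headroom value is an assumed cushion, not a rule from the post.
peak_2014 = 20_000_000   # best day last season, in dollars
growth = 0.10            # business forecast for the upcoming season
headroom = 0.15          # cushion for unpredictable shopper behavior

forecast = peak_2014 * (1 + growth)   # the $22M day you will likely see
target = forecast * (1 + headroom)    # what the systems should actually handle

print(round(forecast), round(target))  # 22000000 25300000
```

Keeping the cushion explicit makes it easy to re-run the numbers when the business revises its forecast mid-season.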

Performance testing

The next key component is designing repeatable and highly configurable performance tests that can stress key components and typical workflows in your application. The amount of user traffic, the distribution of orders, etc., will be based on the forecasts and on site metrics gathered over the last two to three years. Several other aspects need to be considered and decided upon:

  • Is the environment for performance testing equivalent to production? If not, what is the weakest link as a percentage of production?
  • Is content caching available in the environment?
  • Are external integrations available? If yes, at what capacity do they operate?

Additional factors need to be accounted for in tool and test-platform selection. If you are testing through a content delivery network (CDN) such as Akamai, you will have to go through approved testing partners, e.g. SOASTA or Keynote. Testing directly against your internal servers can be done with various tools such as JMeter or NeoLoad.

Last, but not least, any firewall rules or security restrictions against load testing need to be evaluated so that they can be bypassed during the load test (it wouldn't be fun to have every request end with an HTTP 403, right?).
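The "repeatable, highly configurable" requirement can be sketched as a small load harness. The request function is injected so the harness stays tool-agnostic; the stub below stands in for a real HTTP call (which, behind an unbypassed firewall, would be returning that dreaded 403):

```python
import concurrent.futures

# Minimal sketch of a repeatable, configurable load driver. `fetch` is
# whatever issues one request (urllib, a commercial tool's sampler, ...);
# injecting it keeps the harness tool-agnostic. Names are illustrative.
def run_load(fetch, total_requests: int, concurrency: int):
    """Fire `total_requests` calls across `concurrency` workers; return statuses."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(lambda _: fetch(), range(total_requests)))

def fake_fetch():
    """Stand-in for a real HTTP call; a blocked test would yield 403 here."""
    return 200

statuses = run_load(fake_fetch, total_requests=50, concurrency=10)
print(len(statuses), all(s == 200 for s in statuses))  # 50 True
```

Scaling this pattern to the forecast traffic profile (and checking for anything that is not a 200) is exactly what JMeter and NeoLoad scripts parameterise.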

In the next post, I will outline how to get started with looking at the hardware and software stack that powers your site.


DeNale, R., Liu, X., & Weidenhamer, D. (2015, August 17). Quarterly Retail E-Commerce Sales – 2nd Quarter 2015. Retrieved August 22, 2015, from US Census Bureau:

Statista. (2015, March 1). U.S. retail e-commerce holiday season sales from 2007 to 2015. Retrieved August 22, 2015, from

To read the original post and add comments, please visit the SogetiLabs blog: Holiday Shopping – How do e-Commerce Players Get Ready for the Mad Rush?

Related Posts:

  1. Connected Shelves with Sensors and The Future of Shopping
  2. IoT Implementation: Get ready to take the plunge
  3. Key themes for QA & Testing organizations to focus on during Digital Transformation
  4. Shakedown Testing! … What’s that?!



AUTHOR: Nihar Shah
Nihar Shah has been a consultant with Sogeti USA since 2008, working with various multi-national retail clients in the Columbus region. A technology enthusiast by nature, Nihar has worked on various platforms from the IBM and Oracle stack, and has taught himself languages and frameworks such as Ruby, Objective C, and NodeJS. He has worked extensively in designing high volume, reliable and scalable systems using the J2EE stack. In addition to client commitments, Nihar is the practice manager for Digital Transformation practice for the Columbus region and is actively involved in crafting digital solutions for various clients in the region.

Posted in: Behaviour Driven Development, Digital, e-Commerce, Innovation, Internet of Things, IT strategy, Smart, Technology Outlook      
Comments: 0


One size fits all or one size fits none? The relational database model was invented by Ted Codd in the early 1970s. A few early prototypes were built, but the model became mainstream when IBM built and marketed DB2 in 1984. By the early 1990s, relational database systems had come to be considered the standard, and the only way to solve all database-related problems. It didn't matter what kind of workload one needed to support; relational databases were considered the panacea. In other words, a "one size fits all" concept evolved and stayed until the mid-to-late 2000s.

It was at this time that researchers and technology leaders started to question the concept. They realized that for every kind of problem, there is a better way of solving it than the traditional relational database system. Said differently, they realized that "one size fits none"!

Modern Databases for Online Analytical Processing (OLAP) workloads:

Columnar Databases: Data warehouse models, in a majority of cases, are represented as star schemas (or a variation thereof), with fact tables at the center connected to dimension tables via primary-key/foreign-key relationships. Analytical queries are written against the fact tables and typically query and aggregate only a small fraction of the columns available in the tables. Traditional relational databases, on the other hand, are row-oriented, causing a dataset much larger than required to be loaded into memory from disk: the entire record must be loaded into memory even when only a few of its columns are referenced in the client query.

The modern take on improving the design is to shift the paradigm by 90 degrees and store data column-by-column rather than row-by-row on disk. The approach offers multiple advantages over the traditional one. First, it improves performance by loading from disk into memory only the data actually required to serve the client query. Second, the data in a column can be encoded and compressed to a much higher degree than that in a row, because column data is all of the same kind whereas row data is not; compression reduces the storage required and the amount of data that must move between disk and memory. Third, the query executor performs much better in a column-oriented database when the decision to select a value depends on only a few columns. Hence, in a majority of cases a columnar database suits an OLAP workload much better. HP's Vertica, SAP's HANA and Amazon's Redshift are a few examples of such commercially available databases.
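The row-versus-column trade-off above can be made concrete with a toy dataset. Summing one measure in the row layout means touching every field of every record; in the column layout only the one list is touched (and its same-typed values are what compress so well):

```python
# Toy illustration of row vs. column storage for the same three records.
rows = [
    {"order_id": 1, "region": "EU", "amount": 40},
    {"order_id": 2, "region": "US", "amount": 25},
    {"order_id": 3, "region": "EU", "amount": 35},
]

# Same data, stored column-by-column:
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [40, 25, 35],
}

# Row-oriented scan: every field of every record is loaded to read one measure.
total_row = sum(r["amount"] for r in rows)
# Column-oriented scan: only the 'amount' column is touched.
total_col = sum(columns["amount"])
print(total_row, total_col)  # 100 100
```

The answers are identical; the difference is how many bytes had to travel from disk to memory to compute them, which is exactly what columnar engines optimise.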

Modern Database Systems for Online Transaction Processing (OLTP) workloads:

  1. Main Memory Databases: Moore's law, over the years, has allowed the amount of main memory available in database systems to increase exponentially. On the other hand, the data needs of OLTP systems (leaving aside Facebook-like systems) typically do not grow exponentially. The entire catalog of Amazon, for example, is on the order of 1 terabyte and does not grow exponentially, as its growth is usually linked to the growth of the company's business. Surprisingly enough, 1 terabyte of RAM costs $30,000 or less, definitely not a big deal for a company like Amazon. It is also a fact that, on average, around 30% of the time spent executing queries is wasted on managing the buffer cache of the database system. Therefore, many modern OLTP database systems attempt to load the entire database into main memory and get rid of the concept of a buffer cache and the complexities associated with it, such as encoding/decoding data and sending it back and forth between main memory and disk.
  2. Single Threaded Databases: Row-level locking is a necessary evil in traditional database systems. It is necessary because these systems are multi-threaded and allow multiple threads to attempt to update the same data at the same time. It is evil because managing locks and the associated latches is time consuming: locking and latching, on average, take about 30% of the time available to a database, time that should ideally be spent running database queries. A primary reason traditional databases are multi-threaded is that they don't want to keep a transaction waiting while the one ahead of it spends I/O cycles loading data into memory from disk. Modern main-memory-oriented databases, on the other hand, can afford to be single threaded, as they don't need to send data back and forth between memory and disk, and are therefore able to spend more time running user queries. VoltDB is one such memory-oriented, single-threaded commercially available system.
  3. NoSQL Databases: Another category of modern databases is NoSQL databases, which can prove to be a good fit for certain types of OLTP workloads. They provide the ability to scale horizontally, are fault tolerant, provide a flexible schema, and can accommodate non-relational data such as JSON documents. They also support a variety of data models, ranging from simple key-value pairs to more sophisticated models such as document-oriented, column-family and graph databases. However, one needs to tread carefully before going for a NoSQL database, as they sacrifice certain essential database characteristics, such as the ability to "join" and the ability to wrap multiple DML operations in a transaction. Cassandra, MongoDB and CouchDB are some of the popular NoSQL databases out there.
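The flexible-schema point can be sketched with a toy document store: records of different shapes coexist under their keys, with no up-front table definition. The store and field names below are illustrative, not a real NoSQL API:

```python
import json

# Toy document store illustrating schema flexibility: two records with
# different shapes, no table definition. Field names are illustrative.
store = {}

def put(doc_id: str, doc: dict) -> None:
    store[doc_id] = json.dumps(doc)   # documents are just JSON blobs

def get(doc_id: str) -> dict:
    return json.loads(store[doc_id])

put("u1", {"name": "Ada", "email": "ada@example.com"})
put("u2", {"name": "Lin", "tags": ["vip"], "cart": [{"sku": "A-1", "qty": 2}]})

print(get("u2")["cart"][0]["qty"])  # 2
```

Note what is missing: there is no way to join u1 against another collection, and the two `put` calls are not wrapped in one transaction. That is precisely the trade-off the paragraph above warns about.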

AUTHOR: Ashish Kashyap
Ashish Kashyap is a Senior IT Leader with over 15 years of software consulting experience in a variety of industries such as pharmaceutical, retail, airline, automobile, marketing, and utility. He is an experienced solutions architect and has designed and developed B2B and B2C applications on a variety of platforms including J2EE, COTS, and SAAS. Ashish has worked on a number of highly visible integration projects involving Enterprise Application-to-Application (A2A) and B2B integrations using SOA technologies such as the Oracle Fusion Middleware Suite, out-of-box ESB solutions and custom development. In the process, he has built, managed, and mentored offshore and onsite design and development teams. Ashish led the Oracle and IBM practices at Sogeti; in these roles, he was responsible for sales and marketing efforts. Ashish is a Big Data/Analytics/Machine Learning enthusiast and has played the role of National Big Data Solution Architect at Sogeti. He keeps himself up-to-date with the latest technology trends in the area, be it NoSQL databases, the Hadoop Platform or Analytics in the Cloud. If he is not reading blogs, then he is attending online courses on MOOC platforms such as Coursera, Cloud Academy, Udacity etc.

Posted in: Big data, Business Intelligence, communication, Developers, Digital strategy, Managed Testing, Technology Outlook      
Comments: 0


Last year I wrote about how the Cloud redefines the platform we develop on in "It's the platform, stupid". With the recent release of Windows 10, the platform vision becomes even more pronounced. Windows 10 is not just a new version of the Windows operating system; it's the last version. There will be regular updates with new features and feature changes to evolve Windows 10, but it is unlikely to ever get a new version label like Windows 11. This idea is not new. Do you know the version number of the Firefox or Chrome that you are running? And do you really care? These and many other applications are updated (almost) silently to the latest "version". Like services in the Cloud, you benefit from improvements without having to pay extra.

Windows 10 platform

Silent updates are not just for your benefit. For the vendor, too, it means maintaining only one version. Microsoft currently needs to maintain five different versions of Windows, from Vista up to Windows 10. This means that if there's an issue in one of them, Microsoft needs to check the other versions as well; if all five contain the issue, it needs to be fixed and tested five times! As the older versions of Windows phase out, Microsoft will save on maintenance cost. On the other hand, no new version also means Microsoft can't earn money from people upgrading. Microsoft needs the services around Windows to make money, which is also why Windows 10 is not just a new operating system, but rather a platform to build on.

By making Windows a universal platform across devices, Microsoft can leverage the other services it provides and rely on Windows to sell them. For developers, it means they can create a single App that works on any device. PC, tablet, phone, Xbox, and HoloLens are (mostly) the same, and silent updates ensure the feature set is the same across devices running Windows. That relieves developers from having to write software that accounts for differences between versions of Windows; only the form factor remains. In a sense, Windows becomes "just" another service to build on in the platform.

To read the original post and add comments, please visit the SogetiLabs blog: It’s the Windows 10 Platform, Stupid!

Related Posts:

  1. It’s the platform, stupid
  2. 1985-2015: Windows 10 to Kick Off the Second 30-year Era of “Windows Everywhere”
  3. When do we see you on Windows 10?
  4. Bitcoin 2.0: It’s about the platform, not the currency, stupid!



AUTHOR: Michiel van Otegem
Michiel van Otegem is Principal Architect for the Microsoft Business Line at Sogeti Netherlands. In that role he advises clients on strategy and architecture for the Microsoft platform in conjunction with other technologies, specifically focusing on Cloud and Integration. Although focused on Microsoft, Michiel has broad knowledge of other technologies and integration between technologies.

Posted in: architecture, Behaviour Driven Development, Big data, Business Intelligence, Cloud, Developers, Digital strategy, Marketing, Microsoft, Opinion, Transformation      
Comments: 0


In the age of the Internet of Things I am eagerly anticipating amazing new research. Something I have been specifically following in the news is the advances in prosthetics.
When I was a kid I met a number of amputees who were struggling through life with the loss of their limb(s). My father worked as an HR manager at a government company that provided a workplace for people who had trouble finding jobs due to mental or physical disabilities. At that time, I met many of his colleagues, and they instilled in me a desire to find ways to restore and improve the lives of people who lost mobility through the loss of limbs, or of movement in their limbs.

A very important image in my mind is the scene from "Star Wars: The Empire Strikes Back" where, after losing his hand in a battle with his father, Luke Skywalker (the protagonist) gets a new hand fitted. I have been hoping for advances in electronics that could make a prosthetic like that possible.

In the past few years, the advances that have emerged in this field have made my mind boggle. These advances are not just in electronics and mechanics; they are also in the understanding of phantom pain and its treatment. One treatment is mirror therapy, where the visible movement of a missing limb is recreated by seeing the mirror image of the existing limb make the same movement. This could of course be helped by actually having the limb back, but since growing limbs is still in its infancy, it may be a long time before we get there.

In the meantime, there have been advances in making amputees feel again. This could be the next thing in treatment: seeing your artificial hand move does not do much to counter phantom pain, but being able to feel it move or touch again would definitely help.

With the advances in 3D printing, we have seen new feet printed for ducks, and we have seen that it is possible to help young kids by printing new prosthetics for them.

My interest in these advances also keeps me looking for new developments in the field of electronic interfaces. One interesting development has been around for several years already: epidermal electronics, which makes wearable electronics (essential with advanced prosthetics) lightweight enough not to hinder. I have not yet seen this kind of electronics integrated with electronic prosthetics, but I am hopeful these fields of study will eventually merge.

Using an epidermal electronic interface may help in the development of a lightweight exoskeleton that can help people who have lost mobility, with or without the loss of limbs. I wish this had been possible for my friend Philippe, who died of muscular dystrophy this year. A hero in his own right, with a tremendous will to live.

I will keep monitoring the news and the web for new discoveries and, if I am lucky, I may help people become whole again: man aided by machine.

To read the original post and add comments, please visit the SogetiLabs blog: Future Wholeness – The Advances in Prosthetics

Related Posts:

  1. Internet of Things: What Will the Future Bring?
  2. Disruption in Manufacturing: The Future Of Digital Factories
  3. 20 jobs of the future
  4. The Future of Wearables


AUTHOR: Julya van Berkel
Julya van Berkel is an Agile adept and coach who has been championing Agile and Scrum since 2007. In her role as Agile Coach, Julya has helped numerous clients and colleagues get better at Agile, and as a teacher she has set up and taught hundreds of Agile and Scrum trainings and courses. For Sogeti in the Netherlands she helps set the direction in Agile and is involved in many groups within Sogeti and in the wider Agile community.

Posted in: 3D printing, Behaviour Driven Development, Developers, Innovation, Internet of Things, Open Innovation, Opinion, Smart, Socio-technical systems, Technology Outlook, Transformation, User Experience, Wearable technology      
Comments: 0


Since the start of the App revolution we have seen some pretty amazing Apps, as well as a great deal of poorly designed and pretty much unusable ones. I have always liked Apps that have a clear focus. Apps should not start with a menu asking what the user wants to do; they should start with the core functionality, with what the App is meant for, because the App designer already knows what users want to do when the App starts. The designer knows, because he researched and listened to the user group.


I am not against menu-like navigation to separate functionality in an App. It is, however, never the goal of the user to start with the menu. I have never heard a potential App user say: 'As a user I want to use a menu'. Separating functionality lets the App be used more broadly than if it were restricted to one function. I do not believe in one functionality per App; I do believe in one core functionality. The App's focus should be the core.

Layered Approach

The core functionality is the reason the App exists. It is the materialisation of the most epic user story. It is why the user opens the App in the first place, usually. So this, too, is what most of the screen real estate should be used for when the user opens the App. It should be clear by now that this is never the menu. So where do we put the other valuable interactions that are logical to access in this App, but not necessarily the most 'epic of epics'? We use the first layer of navigation: the tab bar or action bar.

By separating the additional App values from the core in this bar, we bring them clearly and logically in front of the user, without needing to ask him where he wants to start today. The user can easily access the functionality and doesn't have to find his or her way around the App: it's clear, but not in their face. To keep the App focused, there is only room for a couple of additional App values. Typically, the first four fit the tab bar; the rest is one more click away.

Now that we have our most epic and additional App values in place, we are sometimes left with some stuff we still have to add. Typically, what is left is largely what users will never mention as a value. Think of security, login, and other configuration and settings: stuff that has to be in the App because it matters to the working of the App, but not so much to the user. This group of features, as we can hardly call them values, we can position in the Settings menu or the 'hamburger' menu.


This clear, layered approach of core, additional App values, and more stuff is a logical and simple way to get rid of the start-up menu, and it presents the user with an elegant and well-thought-through App design.

One more thing: Contextual Core

Thinking of the core as an epic of epics works most of the time. However, the user's context can change his initial expectation and influence the way we look at the user values. This context of location and time can also change the order of the values, making one user value more epic than another, for now, in this context. By taking these user scenarios with context into account, we can be mindful of expectations and adapt our solution to the user and the usage context. Context-aware applications can assist us efficiently, so that we can accomplish our tasks with fewer clicks.

To read the original post and add comments, please visit the SogetiLabs blog: The Next Wave in App Design

Related Posts:

  1. Where High tech stuff meets stone age technology in a refreshing wave of marketing and design
  2. Objects Of Desire
  3. Don’t managers deserve beautiful apps?
  4. Privacy by design: a framework for design in the age of big data


AUTHOR: Arnd Brugman
As an innovator, Arnd helps organisations with innovative project delivery: from the initial idea and inspiration, through the vision, strategy and roadmap, all the way to assisting with proofs of concept and pilots. He has significant experience with innovation, product development and service delivery.

Posted in: Application Lifecycle Management, Behaviour Driven Development, Business Intelligence, communication, functional testing, Human Interaction Testing, mobile applications, Security, Usability Testing, User Experience, User Interface      
Comments: 0


It never ceases to amaze me how, on social media, people I consider to be among the smartest of my friends and business connections still reply to stupid quizzes, contests or puzzles of shady origin. These quizzes take many forms: a series of numbers of which you have to guess the next one, a diagram in which you have to count the squares, a series of sums in which you have to predict the next one, or something silly like 'name a country (or a fruit) without an a'. All worded in such a way that they tease out replies.

A somewhat different category of posts with a similar goal is the 'click reply to win this xyz', in which xyz can be anything from a vacation to a brand-new luxury car. Apparently, freshly started Facebook pages are not suspicious at all when there is even the remotest chance of winning a brand-new BMW. (Yes, genuine sweepstakes exist, but they are often easy to spot: well-established websites and social media profiles.)

Of course, all of these exist to extract likes, comments, links, reposts, retweets, etc.: to gain influencer score, to increase reach, to harvest information. Your comment today will probably mean you'll see a more commercially flavored post from a similar origin later.

So what can we learn from all this?

One: people are people. They respond to challenges and social circumstances in such a way that optimism or ego perhaps sometimes overtakes reason. I find this a very positive thing: every time I see someone reply to one of these, I think 'ah, a sign of hope, the sign of a confident optimist'.

Second, it again proves that social media 'likes' are really worth harvesting, and that it doesn't take much to do it. On this topic I'm a bit less optimistic, especially if I put it in the context of machines that think. What spam was for email, these gimmicks are for social media, but slightly smarter. And if machines really get smarter, will they learn to create their own 'irresistible content' and swamp the web?

And third, all platforms really are part of one digital realm. Even LinkedIn is not immune to this. Basically any platform that has some social dynamic will be susceptible to this type of content. I’m guessing that over time, the differences in content between the different platforms may fade, as they all want to focus on content that is most eagerly shared and discussed.

But personally, perhaps the most annoying thing is that I'm always tempted to post 'People, don't reply to this, it's spam'... but in doing so I would fall into the same trap. It would add my 2 cents of social media value to someone's profile, which I'd much rather spend on someone I really know and like.

To read the original post and add comments, please visit the SogetiLabs blog: 1, 3, 7, 15, … I bet you don’t know the next number in this series!

Related Posts:

  1. Relating unrelated things with social data: Cyclists are more likely to own tablet computers
  2. The Dark Side of Social Media – provocative but very real!
  3. Bitcoin 2.0: It’s about the platform, not the currency, stupid!
  4. VINT series on Big Data now complete


AUTHOR: Erik van Ommeren
Erik van Ommeren is responsible for VINT, the international research institute of Sogeti, in the USA. He is an IT strategist and senior analyst with a broad background in IT, Enterprise Architecture and Executive Management. Part of his time is spent advising organizations on innovation, transformational projects and architectural processes.



The Internet of Things (IoT) in Retail has been redefining the buying and selling experience for decades. Remember when bar code scanners first arrived in our local stores, speeding the checkout process? Or the first time we entered a credit card number into our browser to make an online purchase? Or the moment we became both customer and clerk as we checked ourselves out at a register? The Internet of Things has fundamentally changed the course of the retail business. No wonder, then, that worldwide retail investment in the Internet of Things is expected to hit $37.6 billion and to grow 20-35% per year over the next several years.

Today retail is synonymous with handheld devices that empower the customer and the retail associate. These devices, along with digital signs on shelves and walls, provide a truly mobile experience and allow dynamic pricing, personalised coupons, and purchases anywhere in the store. Both buyers and sellers have easier access to up-to-the-minute product, pricing, inventory and competitive information. And that’s a huge transformation in the retail value chain. Sogeti has helped several retailers roll out tablets and concierge apps that enable store associates to access virtual product catalogs and customer profiles remotely and create a delightful in-store experience for the customer.

Up close and personal with the customer, thanks to the Internet of Things!

Today retailers use technologies like radio-frequency identification (RFID) tags, geofenced stores, headcount cameras and gesture-recognising displays that track customer movement within the store and help create richer and more personalised shopping experiences. This “fog” of devices and connected things in the store is made omniscient by Cloud connectivity over Wi-Fi, Bluetooth, 4G LTE and other communication protocols, which provide access to Big Data storage and fast analysis. Insights arrive quickly and enable immediate action by both buyer and seller. Retailers work with marketing research experts while using Cloud computing resources to crunch the numbers and look for buying patterns and trends through statistical analysis and machine learning.
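As a toy illustration of the kind of buying-pattern analysis described above (the basket data and function name here are invented for the example, not any retailer's actual pipeline), co-purchase frequencies can be mined from point-of-sale baskets in a few lines:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(baskets, min_count=2):
    """Count how often each pair of products is bought together.

    A toy stand-in for the basket analysis a retailer might run in
    the cloud against point-of-sale and RFID event data.
    """
    counts = Counter()
    for basket in baskets:
        # sorted(set(...)) makes each pair canonical and ignores duplicates
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_count}

baskets = [
    ["milk", "bread", "eggs"],
    ["milk", "bread"],
    ["bread", "jam"],
    ["milk", "eggs"],
]
print(frequent_pairs(baskets))
# → {('bread', 'milk'): 2, ('eggs', 'milk'): 2}
```

In a real deployment the baskets would stream in from checkout and RFID events, and the pair counts would feed recommendation or shelf-placement decisions; the mechanics, however, are exactly this counting step at scale.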

The line between the real and virtual worlds is blurring

In the next few years, expect to see science fiction become retail fact, as augmented reality enhances trying-on-and-buying everything from clothes, cars and furniture to books, movies, and video games. Expect concerns over privacy (though important) to be offset by the convenience of highly personalised services and customised information. IKEA lets you style and place virtual furniture wherever you lay down its product catalogue, viewed through your smartphone or tablet. Lego lets you see and rotate a fully constructed and animated Lego set on top of the box at a kiosk or through your device.

Experience replicators, holodecks and robots in your local stores, neighborhood and eventually at home as 3D printers, holographic displays and drones allow you to highly customise existing products, bring your own ideas to life and take delivery in hours.  Home Depot partnered with MakerBot last year to create a 3D printer station in the store where you can scan and customise a hard-to-find knob, handle or part almost as easily as matching a paint color.  Tablet-based, table-top and wearable holographic displays like CospeTech Holho and Microsoft HoloLens will allow for more immersive interaction with virtual products reducing the need for large amounts of inventory and retail space. Amazon, Google and others are actively refining driverless cars, trucks and drones to deliver your customised orders the same day (or hour).

As Doug Stephens, founder of industry website Retail Prophet and author of The Retail Revival states, “We will see more disruption in the next ten years of retail than we did in the previous one thousand.”

To read the original post and add comments, please visit the SogetiLabs blog: The Internet of Things in Retail

Related Posts:

  1. Future of Retail: Nightmare on Elm Street
  2. An OS for the Physical World (of retail)
  3. A Facebook for (the internet of) Things
  4. Connected Objects are Physical Avatars for Digital Services

AUTHOR: Erik Leaseburg
Erik Leaseburg joined the Sogeti Dallas office in May 2014. As a senior manager and solutions architect, Erik is focused on architecting, selling and leading innovative client projects in the Microsoft and Digital Transformation practices; developing technology partnerships with Microsoft, Xamarin, AIMS Innovation cloud monitoring, Metaio augmented reality,…; and fostering developer community and STEM outreach through work with Microsoft DigiGirlz – Intro to Game Development, Dallas Computer Vision-aries – Augmented Reality, IdeaWorks Fort Worth, Startup Weekend – Cashflow Mobile App, Dallas Entrepreneur Center – Kinect Hackathon, Girl Scouts – Intro to Web Development, and local .NET user groups. Erik has 20 years of architecture, evangelism, management, development, consulting, entrepreneurship, sales, training, outreach, public speaking, and R&D experience. He started his career in consulting at Andersen Consulting/Accenture, followed by 12 years of product architecture, services and sales at Microsoft, and more recently ITR Mobility and Neudesic. He has broad industry experience in communications, education, retail, transportation, oil and gas, finance, healthcare, insurance, engineering and software development. Erik has deep and broad experience with Cloud (Azure PaaS/Web/Mobile, IaaS/VMs, O365, SQL, TFS, BizTalk, Service Bus), NUI (Kinect gesture/audio, Metaio augmented reality, Surface, Robotics, IoT) Cross-Platform Development (Mono/Xamarin/C#, HTML5/CSS3/JavaScript, Mobile/Web/Desktop, Unix/Linux/Mainframe), Application Lifecycle Management (Agile/Scrum/CMMI, Visual Studio/TFS, Test/QA), SOA/ESB (BizTalk, XML/XSD/XSLT), ERP (Dynamics, SAP, Oracle), Business Productivity (SharePoint, SQL/BI) and Microsoft Development Frameworks (.NET/C#/VB, ASP.NET, MVVM/MVC/EF, WCF/WPF/XAML/WF, LINQ). Erik is certified as an Azure MCP, Visual Studio/BizTalk MCTS, MCSE, MCDBA, and MCSD. 
He has delivered 200+ webcasts, spoken at many TechEd and user group events, conducted 100+ training classes, and authored several papers.



Following my colleague Philippe André’s blog post about Hyperconvergence, I’d like to add my point of view on this topic.

A few years ago, data-center infrastructure belonged to the client-server generation. Then came Virtualisation, with separate tiers for compute (standard servers) and storage (SAN), connected by the network. Today we are in the Cloud generation, and hyperconvergence is redefining the way we work: it integrates compute and storage in the same box.

Hyperconvergence offers multiple benefits.

I’ve had the opportunity to work with the Nutanix system, and the points below are based on my experience with it.

Less expensive

With a traditional infrastructure, you have to invest up front in the storage technology, which makes the initial investment very high. With hyperconvergence, the concept of “pay as you grow” is making its mark on business models: you buy only what you need, and if you need more compute and storage, you simply add one or more nodes.

Simple to manage

With most storage technologies, you need specific skills. With hyperconvergence, you don’t: it is easy to install, deploy and manage. The best word to describe this benefit is “simplicity”. I’ve never been a storage expert, but now I can easily manage the storage of my clusters in the data-center.

Easy to scale up

With hyperconvergence, you can very easily scale up your infrastructure. If you need to add some nodes to your cluster, you can do so with just three clicks of the mouse. And with Nutanix, there is no limit to the number of nodes you can add.

High performance

Finally, you are assured of high performance. On a Nutanix node, part of the storage sits on SSD and the rest on a standard disk. Data that a VM accesses frequently (hot data) is kept on the SSD, which assures high performance; data that is accessed less often (cold data) is stored on the standard disk. The storage follows a “software defined” model, meaning it is managed by software hosted on the node itself.
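The hot/cold tiering idea can be sketched in a few lines. This is an illustrative toy model only (not Nutanix's actual placement algorithm), and the class, key names and threshold are all invented for the example:

```python
class TieredStore:
    """Toy sketch of SSD/HDD tiering: keys read frequently are
    promoted to the 'ssd' tier; everything else stays on 'hdd'."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold  # reads before promotion
        self.access_counts = {}
        self.ssd, self.hdd = {}, {}

    def write(self, key, value):
        self.hdd[key] = value  # new data starts on the capacity tier

    def read(self, key):
        self.access_counts[key] = self.access_counts.get(key, 0) + 1
        if key in self.ssd:
            return self.ssd[key]
        value = self.hdd[key]
        if self.access_counts[key] >= self.hot_threshold:
            self.ssd[key] = self.hdd.pop(key)  # promote hot data to SSD
        return value

store = TieredStore()
store.write("vm-disk-block-42", b"...")
for _ in range(3):
    store.read("vm-disk-block-42")
print("vm-disk-block-42" in store.ssd)  # → True: promoted after repeated reads
```

A production system would also demote data whose access rate drops and account for SSD capacity limits, but the promote-on-heat decision above is the essence of what "software defined" tiering automates for you.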

Of course, you can also implement scenarios for high availability and disaster recovery. So, for me, hyperconvergence is less expensive, assures high performance and availability, and is easy to scale up and manage. Hyperconverged infrastructure is here now, and it is the new way of designing infrastructure. If you need more information about hyperconvergence or Nutanix, do not hesitate to contact me.

To read the original post and add comments, please visit the SogetiLabs blog: How hyperconverged infrastructure convinced me?

Related Posts:

  1. One Data per World
  2. Is it enough to just upgrade your IT infrastructure?
  3. Hyper-Convergence in Data Center: Promises and Reality
  4. Master Your Infrastructure Like You Master Your Code


AUTHOR: Gregory Bouchu
Gregory Bouchu has been a Senior Infrastructure Consultant for Sogeti Switzerland since 2013. As a consultant, he works on infrastructure projects based on System Center. In parallel, he works as Datacenter Stream Leader, developing a commercial offer for Datacenter Management with Microsoft technologies based on Windows Server, System Center and Windows Azure Pack. From 2012 to 2013, he worked for Microsoft in France as a Premier Field Engineer (reactive and proactive missions for Premier customers), and from 2010 to 2012 as a Microsoft team manager for Linkbynet in France (a French web hoster). Previously, he served in the French military (from 1993 to 2010), where he began as a tank leader for eleven years before starting his IT career. He has worked on the design, architecture and deployment of Microsoft technologies (Exchange, Configuration Manager,…). He also speaks at events like Techdays and Knowledge Sharing in Sogeti, and was awarded MVP (Microsoft Most Valuable Professional) 2014 for sharing with IT communities.



What I learnt from Katie…

I read a book a couple of months ago about a poor girl, Katie, who had always had bad luck with guys: one boyfriend borrowed all her money and disappeared to Mexico, another two-timed her, and her last partner used her to paint his apartment. She was so devastated that she thought she would never find anyone nice and decided to give up. But her best friend tried to stop her and gave her this advice: go to lunch somewhere completely different, and maybe you’ll meet someone there, someone who could become the one. And so she did. She did something different and met someone completely unlike the losers she had dated before. That way, she could finally be happy.

To be honest, this book is not a masterpiece, but it reminded me of one of the seven testing principles according to ISTQB: the pesticide paradox. This principle states that if the same tests are repeated over and over again, they will eventually stop finding new defects. That is, if we always use the same test cases, the software becomes immune to them: all the bugs those tests can find will already have been exposed, and we won’t uncover new ones.

If you want a different result, dare to be different…

If we expect a different result and want to maintain the effectiveness and efficiency of our test cases, they need to be regularly revised and updated, and we have to write new and different tests that exercise different parts of the application to find more defects. Sometimes it is also important to identify the test cases that have never detected a defect and remove them from our test plan, so that it doesn’t grow to infinity and beyond and we don’t waste time running ineffective test cases.
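One simple way to keep a suite from fossilising is to retain a stable regression core while seeding a fresh batch of varied inputs each test cycle. The sketch below is purely illustrative, with invented function and variable names, not a prescribed ISTQB technique:

```python
import random

def seeded_cases(base_cases, seed, extra=3):
    """Sketch: keep a stable core suite but append freshly varied
    inputs each cycle, so the suite does not fossilise around one
    fixed data set. Reusing a seed reproduces a cycle exactly."""
    rng = random.Random(seed)  # seed per test cycle, e.g. the sprint number
    varied = [rng.randint(-1000, 1000) for _ in range(extra)]
    return list(base_cases) + varied

core = [0, 1, -1, 10]  # regression cases we always keep
this_sprint = seeded_cases(core, seed=7)
last_sprint = seeded_cases(core, seed=6)

# The core is always retained; the varied tail changes from cycle to cycle,
# yet each cycle stays reproducible because its seed is recorded.
print(this_sprint[:4] == core)  # → True
```

Seeding the generator is the important design choice: the inputs change between cycles (countering the pesticide effect), but any failing run can be replayed exactly by reusing its seed.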

Just as developers improve the code when correcting bugs, testers have to improve the test cases. This way, we can find the resistant bugs against which our initial test cases were ineffective, as well as the subtler bugs introduced into the system as a side effect of correcting the former.

Additionally, it is important not to rely solely on structured and formal test techniques, but to perform exploratory testing, error guessing or other informal approaches to increase the chances of capturing new bugs. The more variety in test techniques and methods, the better.

To wrap things up, we as testers should remember that if we do the usual thing, we get the usual result. So we have to act like Katie and do something different to get a different result. If we do, we can rely more on our test results and be confident that we have found the greatest possible number of defects (since finding all of them is very difficult).



To read the original post and add comments, please visit the SogetiLabs blog: The Pesticide Paradox in Testing: If you want a different result, do something different!

Related Posts:

  1. Modelization of Automated Regression Testing, the ART of testing
  2. Shakedown Testing! … What’s that?!
  3. What the *BLEEP* do we know about Software Testing?
  4. Everybody tests with Agile

AUTHOR: Paloma Rodriguez
Paloma Rodriguez has been Test Engineer for Sogeti Group since 2011. In this role, she manages testing projects and participates in various publications and training activities.
