As an industry we have done a reasonable job (with some notable exceptions) of protecting our network perimeters from attack from the outside. But there is still a significant amount of work to be done about attacks from within. The old, trusted employee in the corner who keeps everything working until the day they disappear with the company’s bank balance has been around long enough to become a cliché, and yet it still happens.

With BYOD (Bring Your Own Device) and the IoT (Internet of Things) expanding their footprint with increasing force across work and home, the spread of these always-on devices brings us to a much overlooked but very real and present danger: what InfoSec professionals are referring to as the Felis Catus exploit.

The origins of the attack vector are unclear, but it appears to be a much overlooked and very persistent threat that has been lurking around for years.

Felis Catus is best known for its ability to infiltrate small spaces and can often be found attached to running servers and laptops where it can cause keyboard problems and lead to overheating.

We know that back in the 1960s the CIA implemented the first backdoor based around Felis Catus. Accounts vary as to what happened, and urban legend has the prototype being run over and killed. We do know that the programme shut down around 1967.

Now, instead of requiring a team of surgeons to put the equipment inside the cat, many pet owners have collars that can track and report back to base on where the cat has been, and even what it has seen and heard, with cameras and GPS. You can even get a nice set of biometrics embedded inside the cat.

Okay, this is an edge case. But it does show that we need to make sure that we are very careful with what devices (cats) we instrument and where we let them go. Today the cat might just be after some warmth. But how long before it comes with a keylogger?

So when did you last run a security scan on your cat?

Read more about Acoustic Kitty



Posted in: Digital, Internet of Things, IT strategy, Quality Assurance, Research, Security, Smart      


Getting Smarter

With Gartner predicting that the average home will have 500 smart devices by 2022 and a recent survey by Yale showing that security is the second most important driver for the connected home, after smart energy, it’s important to consider which parties in the development lifecycle and sales process are responsible for securing our smart homes. So, what are the smart home products and associated security risks; what are the challenges to addressing these risks and how can they be overcome; and who are the parties that can be held ultimately accountable?

Uncertain Behaviour

The areas that are most likely to grow quickly are safety, security and assisted living. Smart safety products include sensors to detect smoke, flooding or extreme temperatures. Security can be boosted by cameras, motion sensors, door and window sensors and alarms, electronic locks and panic buttons. The elderly and people with disabilities can gain more independence and a better quality of life by using fall detectors and wireless sensors to track their daily activity and sleep and ensure they are taking their medication in the right quantity at the right times. As the market is so new there is little standardisation: customers are uncertain how to use and operate the products, and developers and vendors are unsure as to how their customers will interact with them.

Feeling Vulnerable

The 3 main areas of potential vulnerability are:

  1. The technical security of the devices and the Home Area Network on which they operate;
  2. The connectivity from the customer’s home broadband to the cloud; and
  3. The customer’s login and password authentication process.

Ideally the smart devices should be connected to the cloud with proper SSL/TLS encryption of the standard required for online banking and eCommerce shops. The customer needs to be made fully aware of their own part in smart home security, ensure that their WiFi is secure, and use strong passwords for all associated online accounts.
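As an illustration of what “proper SSL/TLS” means in practice, here is a minimal Python sketch of a device-to-cloud connection that keeps certificate and hostname verification switched on (the library defaults), rather than disabling them for convenience — a common shortcut in insecure devices. The host name and timeout are placeholders.

```python
import socket
import ssl

def open_cloud_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection with certificate and hostname verification
    enabled, rather than turning them off for convenience."""
    context = ssl.create_default_context()  # verifies certs against the system CA store
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    raw = socket.create_connection((host, port), timeout=10)
    # server_hostname enables SNI and hostname checking against the certificate
    return context.wrap_socket(raw, server_hostname=host)
```

The important point is what is *not* here: no `CERT_NONE`, no `check_hostname = False` — the two lines that quietly undo TLS in many embedded products.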

Back in 2013, Forbes journalist Kashmir Hill proved how easily smart home devices can be hacked, simply by entering a few simple search terms into Google and discovering a list of homes and the smart devices installed in them. The main issue was that the manufacturer and vendor had not made usernames and passwords obligatory, and there was no authentication between the handheld operating system and the device. Hill was therefore able to click on the devices online and operate them remotely, turning on lights and heating and viewing energy usage to estimate when people were at home or out – a scene straight out of Poltergeist and a dream scenario for hackers and robbers.

3 Challenges

Some of the main challenges to securing smart home devices are:

  1. Smart home devices have highly specialised functions so normal security solutions are insufficient.
  2. Security needs to be built into the device from the outset and can rarely be added at a later date.
  3. The end user can’t purchase and install a third party security upgrade as they might with a general purpose device. Only the original manufacturer can provide an upgrade.

Security Testing

From a testing perspective, the World Quality Report reveals that many organisations are concerned that the sheer pace of release is a barrier to proper QA and security testing. We need to take a shift left approach to smart device security testing to ensure security is built in from the outset. However, with multi-party vendors contributing different device components, access can be difficult, while replicating and creating live test environments is extremely costly and time consuming. Proper threat modelling requires the ability to foresee all the ways in which a cybercriminal may seek to hack the device, which can be difficult in an immature sector with a notable skills gap.

Another issue facing the testing community is that, while taking responsibility for their own role in smart device security, they also need to impress upon everyone in the development, manufacturing and sales process that security is everyone’s responsibility, including the end users. These factors are drivers for a higher level of test automation and industrialisation, and for the widespread adoption of built-in predictive analytics to monitor user behaviour, gauge maintenance needs and vulnerabilities, and provide predictive upgrades and maintenance on test benches to allow continuous testing and proper protection.

Responsible & Accountable

Arguably, in light of all the factors we have discussed, the ultimate responsibility for smart home security lies with the Original Equipment Manufacturer (OEM), because the security measures need to be embedded into the design and tested iteratively. The OEM is also responsible for selecting the OS and processor and determining security protocols and authentication. The Operating System vendor has a vital role to play in ensuring the security of each of the individual parts they provide. The device Chip Vendor must create processors with built-in code verification, encryption engines and detectors to report back when a device has been physically tampered with. It is also highly advisable to consult a specialised Internet of Things (IoT) security firm who can advise on best practice, emerging industry standards, a shift left approach to embedding security measures and the best way to inform the end user of the role they need to play in their own security. Where devices are provided over a network, the phone and broadband companies can play an even more essential role than the OEM, as they have the finances and leverage to ensure that the OEMs opt for built-in security measures and to build security into their own network designs. All of these groups need to work together to ensure that the end user has access to clear information about how to use and secure the device, and the end user must take responsibility for following these instructions to the letter.

So in answer to the question “who is responsible?” – we all are. We must all play our parts diligently, as a failure by any party involved means the whole security of the device can fail. It is only through collaboration, communication and interoperability that we can keep the ghost out of the machine.



Posted in: Automation Testing, Behaviour Driven Development, Cloud, Collaboration, communication, e-Commerce, Innovation, Internet of Things, IT Security, IT strategy, Research, Security, Smart, Social Aspects, Testing and innovation, User Experience, User Interface, World Quality Report      


Finding bugs and eating chocolate are 2 of our favourite things at Sogeti, so, as Easter is just around the corner, we’re holding a very special #EasterBugs competition from 21st-24th March 2016!

Yes, we’ve put lots of delicious chocolate bugs in a jar and we want you to guess how many there are!

The person who guesses the number closest to the actual number of bugs in the picture will win a copy of our new book “Testing in an IoT Environment”, which details all the components you need to apply more structured, high-tech testing to the Internet of Things.

Each day is a new competition, with a new photo of the jar containing a few fewer bugs (which will no doubt have been eaten by the Sogeti marketing department!)

Here’s how to enter:

  1. Follow @UK_Sogeti on Twitter
  2. Retweet the #EasterBugs photo
  3. Tweet us how many bugs you think are in the photo with the hashtag #EasterBugs

You can enter as many times as you want each day and you can enter every day if you wish, to increase your chances of being a winner. The competition starts at 09:30 and finishes at midnight each day. The previous day’s winner will be announced at 09:00 the following morning. To enter you must not be an employee of the Capgemini group.

We have a few surprises and lots of fun planned along the way, so be sure to follow us on Twitter @UK_Sogeti to join in our holiday challenge.

Let’s be great testers and find all those bugs!

Happy Easter



Posted in: Infrastructure, Innovation, Internet of Things, Marketing, Social media      


Love is All Around

Love is in the air: it’s Valentine’s Day and a leap year, so we thought it would be fun to take a look at the technology and algorithms of internet dating. How scientific are they and do they work? How can we test the User Experience (UX)? And what happens when a customer hacks the system?

Dating sites and their algorithms can be divided into 4 broad categories: profile matching, online behaviour analysis, deep dive questionnaires and binary answers based on attraction!

Match is built on the profile matching model. Users are only required to fill in their profile – questions about physical appearance, hobbies and preferences – rather than a deeper personality test or questionnaire. Match doesn’t offer a compatibility percentage, instead recommending profiles to users with a list of common interests. The Match algorithm goes a little further with some questions (such as smoking), taking into account whether a prospect’s answer is imperative or unimportant. The cleverest element of the Match algorithm is that it also monitors users’ behaviour and sends you behaviour-driven profile recommendations, even where those users’ answers contradict your stated preferences. This covers that love scenario we all know so well, when our mind is telling us one thing while our heart is saying another!
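The common-interests style of recommendation can be sketched with a simple set-overlap (Jaccard) score. This is a hypothetical stand-in for the proprietary algorithm, not Match’s actual code; the function and profile names are illustrative.

```python
def shared_interest_score(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: size of the overlap divided by the combined set.
    Returns 0.0 when neither user has listed any interests."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(user: set[str], candidates: dict[str, set[str]], top: int = 3) -> list[str]:
    """Rank candidate profiles by how many interests they share with the user."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: shared_interest_score(user, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top]]
```

A behaviour-driven layer of the kind described above would then re-rank these results using which profiles the user actually views and messages, not just what they claim to want.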

Science & Extreme Testing

eHarmony, geared towards long term relationships and marriage, has the most extensive mandatory online personality questionnaire with 200 questions covering 29 lifestyle areas. Their USP is that users can’t browse one another’s profiles; they can only see prospects recommended by the site based on 3 years of research on 5,000 married couples. The downside is that it takes physical attraction out of the equation as users can’t select people based on their physical appearance. Their algorithm is based on 6 factors:

  1. Level of agreeableness
  2. Preference for emotional intimacy
  3. Degree of sexual and romantic passion
  4. Level of openness
  5. Spirituality
  6. Happiness & Optimism

Users are then matched on a points system that looks at both users’ scores over a range of questions and how they arrived at those scores. So, on a scale of 1-7, if their combined points for an answer equal 8, they are a better match if their individual scores were 4 and 4 than if they were 7 and 1. eHarmony’s algorithm has been dubbed the “scientific approach”. Interestingly, matches are made by scoring users on how similar their answers are – the potential downfall being if opposites really do attract, and the fact that 2 extroverts may not make the best match!
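The closeness principle here – same combined total, but closer individual scores make a better pairing – can be illustrated with a toy scoring function. This is purely illustrative; eHarmony’s real weighting is not public.

```python
def pair_score(a: int, b: int) -> int:
    """Toy closeness-weighted score on a 1-7 scale: reward the combined
    total but penalise the gap between the two answers, so that 4+4
    beats 7+1 even though both pairs sum to 8.
    (Illustrative only; not eHarmony's actual formula.)"""
    return (a + b) - abs(a - b)
```

Under this sketch, `pair_score(4, 4)` gives 8 while `pair_score(7, 1)` gives only 2, matching the example in the text.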

At the 2013 Society for Personality and Social Psychology AGM, critics said that the only way to test the efficacy of the eHarmony algorithm would be to run a randomised clinical trial with a control group, a 2nd group matched according to the algorithm and a 3rd group arbitrarily matched, and track their relationships over time to see who was happiest. The problem with this extreme version of UX/CX testing would of course be the numerous potential breaches of users’ human and other non-waivable legal rights, allegations of fraud, damages sought for emotional distress, the questionable morality of the whole concept and the potential reputational damage to the brand!

Deep Dive

OKCupid is definitely one of the most interesting sites, with the most accessible user interface and exciting UX. In addition to creating a detailed profile, users are invited to answer c2,000 questions about their religious and political beliefs, hobbies, daily habits and sexual preferences. The questions range from mundane to bizarre, titillating and bordering on scandalous! The algorithm also appears to be one of the most complex, taking into account 3 variables for every question:

  1. Your answer
  2. What answer you would accept from a match
  3. How important is it?

The algorithm attributes a numerical value to the level of importance. These are as follows:

  • Of Little Importance: 1
  • Somewhat Important: 10
  • Very Important: 50

It then multiplies the 2 users’ scores and takes the Nth root, where N is the number of questions answered. Users are then presented with the profiles of their highest matches, but are also free to stipulate search terms and browse everyone’s profiles. From a UX perspective the differentiator is that you don’t just get matched up – you can read everyone’s answers to these revealing questions when deciding your love match!
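One reading of this, close to the matching formula OKCupid has described publicly, is a geometric mean of two “satisfaction” fractions: the importance-weighted points each user awards the other’s answers, out of the points possible. The importance weights (1, 10, 50) come from the list above; everything else — the data shapes, names and the two-question example — is an illustrative sketch, not OKCupid’s actual code.

```python
# Importance weights taken from the values quoted in the text.
IMPORTANCE = {"little": 1, "somewhat": 10, "very": 50}

def satisfaction(answers_a: dict, answers_b: dict) -> float:
    """How satisfied user A is with user B: the weighted points A awards
    B's answers, as a fraction of the points B could have earned.
    Each entry is question -> (own_answer, acceptable_answers, importance)."""
    earned = possible = 0
    for q, (_own, acceptable, importance) in answers_a.items():
        weight = IMPORTANCE[importance]
        possible += weight
        if answers_b[q][0] in acceptable:   # B gave an answer A would accept
            earned += weight
    return earned / possible

def match_percentage(answers_a: dict, answers_b: dict) -> float:
    """Geometric mean of the two satisfaction fractions - multiplying the
    two scores and taking the root, as the text describes."""
    return (satisfaction(answers_a, answers_b) * satisfaction(answers_b, answers_a)) ** 0.5
```

Note the asymmetry this captures: each user’s answers, accepted answers and importance ratings are scored separately, then combined, so one unforgiving “very important” question can drag the whole match down.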

Hot or Not?

Tinder caters for people who don’t want complicated algorithms, they simply want to know who’s hot and nearby. When asked what they hoped to learn about their customers from data mining and testing, CMO Justin Mateen said he was most keen to discover “the number of matches that a user needs over a period of time before they’re addicted to the product”.

OKC Gets Experimental

Writing on the OkTrends blog, founder Christian Rudder says “we might be popular, we might create a lot of great relationships…but OkCupid doesn’t really know what it’s doing. Neither does any other website….experiments are how you sort all this out.” So how does OKC test UX?

Blind Date

When OKC launched a separate blind date app they decided to test the new UX on OKC itself by making the site blind and hiding everyone’s profile picture. All metrics were way down for a comparative Tuesday, but the results were fascinating – people responded to messages 44% more often, had more in-depth conversations and exchanged contact details more quickly! When they put the images back up at 4pm, the majority of the 2,200 people who had been talking blind failed to stay in contact, proving that physical attraction is an online dating priority. The new blind date app’s user feedback data showed that both sexes had a good time regardless of how good looking their partner was.

Rate Your Mate

OKC also ran A/B tests on the system customers use to rate one another’s profiles. They changed their system from 2 scales, “looks” and “personality”, to a single scale of whether the profile was appealing or not. Then they performed split tests: half the time when they showed a profile they hid the text and just displayed the image, and the other half they showed both, giving 2 independent sets of scores for each profile. The results showed that the profile text has less than 10% impact on the rating! Finally, Rudder wanted to test whether the OKC percentage matching is actually accurate, or if people simply liked one another because of the power of suggestion. He paired profiles with a 30% match and told them that they were in fact a 90% match (with a full reveal to users after the testing). As expected, users sent more initial messages to prospects with a higher match percentage.
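A split test of this kind can be sketched in a few lines: bucket each user deterministically into variant A or B (so they always see the same variant), collect the ratings each variant produces, and compare the means. The salt string and data shapes here are hypothetical, not OKCupid’s implementation.

```python
import hashlib
import statistics

def assign_variant(user_id: str, salt: str = "profile-text-test") -> str:
    """Deterministically bucket a user into variant A or B by hashing
    their id with a per-experiment salt, so assignment is stable
    across visits without storing any state."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    return "A" if digest[0] < 128 else "B"

def mean_difference(ratings: dict[str, list[float]]) -> float:
    """Difference in mean rating between the two variants (A minus B)."""
    return statistics.mean(ratings["A"]) - statistics.mean(ratings["B"])
```

A real analysis would of course add a significance test before declaring a winner; the sketch only shows the bucketing-and-compare shape of the experiment.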

Suggestive Behaviour

OKC then took it further to see if the power of suggestion could lead people to really like one another, by seeing how increasing the match percentage increased the likelihood of a proper conversation (4 messages). Sure enough, the odds of a conversation went up the higher the perceived match percentage. So did this mean that the OKC algorithm was nonsense? They decided to test in the opposite direction and tell users who had a high match that their match percentage was very low. Those who genuinely matched at 90%, and were told their true match, were the most likely to have a proper conversation, as expected. The remaining results showed that the OKC algorithms were doing a pretty good job – but also that suggestion is powerful: the likelihood of conversation still rose, by only slightly smaller amounts, when people perceived they had a high match but actually had a low one.

These tests enable OKC to employ test driven and behaviour driven development and are the secret behind the fact that OKC users are devoted fans who find the whole UX way more fun than other dating sites.

Love Hacking – The Customer as Tester

Some of the recent online articles reviewing Gartner’s report on Bimodal IT asked if the Fluid IT model would go so far with Test Driven and Behaviour Driven development as to let the customer evolve the product themselves. With this in mind no discussion about testing dating sites is complete without a mention of Los Angeles mathematician Chris McKinlay who, in a twist on CX testing, hacked OKC’s algorithms to find the girl of his dreams.

According to a 2014 interview, around 20,000 local women were using OKC, but McKinlay found that he was receiving fewer than 100 profiles with 90%+ compatibility and that his profile was therefore practically invisible. He decided to solve his dating problems mathematically and devised a love-hack solution. Using Python scripts he mined data from hundreds of OKC questions, divided his potential matches into 7 categories, and set up 12 fake accounts to answer questions and mine data from the profiles of women in each demographic who had answered the same questions. He even got around OKC security by installing spyware on his computer and monitoring his friend’s use of the site, so he could mimic it and avoid his bots being spotted. In 3 weeks he had mined 6 million answers from 20,000 women in the categories, analysed the data and decided which 2 categories were right for him. Finally he set up a real profile and honestly answered the 500 OKC questions that mattered most to him, leaving his computer to use the data to determine how much importance to attach to each question as per the OKC algorithm. When this was complete and he sorted the results according to match percentage, he found that he had over 10,000 90%+ matches!
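The clustering step – dividing mined profiles into categories by answer similarity – can be sketched with a minimal k-means. McKinlay reportedly used a clustering method suited to categorical answers; this numeric version is only a simplified stand-in, with the answer vectors coded as numbers.

```python
import random

def kmeans(points: list[tuple], k: int, iters: int = 20, seed: int = 0) -> list[list]:
    """Minimal k-means: group numeric answer vectors into k clusters by
    nearest centroid. A toy stand-in for clustering dating profiles
    into categories, not the method McKinlay actually used."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # start from k random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign each point to its nearest centroid
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [                           # recompute centroids as cluster means
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters
```

With two well-separated groups of answer vectors, the loop converges in a couple of iterations regardless of which points are sampled as the initial centroids.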

Finally to raise his visibility he wrote a program to visit the pages of all his 90% matches and messages from women in his chosen categories started to flood in! McKinlay had achieved his goal of getting higher visibility on the site and increasing his introductions to more suitable women. Back in reality he went on 87 not so great first dates before he finally met Christine Tien Wang and married her a year later. If his data mining and testing proved anything, it’s that Maths may well get you a date, but only Chemistry and Biology will get you a relationship.



Posted in: Behaviour Driven Development, Biology, communication, component testing, Human Behaviour, Human Interaction Testing, Innovation, Internet of Things, IT Security, IT strategy, mobile applications, mobile testing, Quality Assurance, Research, Social Aspects, Socio-technical systems, Software Development, test framework, User Experience, User Interface      


Apple Pay is a pure-play NFC-payment system. It uses the Near Field Communication chip in specific Apple devices to make a contactless payment. Technically it will work in stores that accept contactless card payments, although it will also require the vendor to have signed up.

The Touch ID fingerprint scanner is key to the whole thing, but you will also need a specific NFC antenna that is built into certain Apple devices.

If the shop you’re in supports Apple Pay, they will have a little sensor by the till. You put your iPhone on the sensor, put your finger on the Touch ID fingerprint scanner to identify yourself, and that’s it. There are NFC antennae in the iPhone 6 and iPhone 6 Plus, but not in any earlier iPhones. There are also NFC chips in the iPad Air 2 and iPad mini 3, but they appear to be deactivated for the time being; no iPad is able to use the full, in-store version of Apple Pay.

The Samsung Galaxy S6 and S6 Edge both have NFC for use with Samsung Pay, but there’s a key technical difference between Samsung Pay and Apple Pay. Rather than limiting transactions exclusively to NFC, Samsung Pay will also work with magnetic strip readers – a much older technology which is available in virtually every shop that accepts card payments. Samsung is doing this with a new proprietary technology called Magnetic Secure Transmission, or MST for short.

The key to leading the nascent mobile payment market lies in guaranteeing “wider usage”, and Samsung’s use of magnetic secure transmission (MST) technology gives the company a significant advantage over its two main rivals, Apple Pay and Android Pay.

MST enables users to make payments in places where traditional magnetic-strip credit cards are accepted. Samsung’s two main rivals have yet to use the technology.

MST technology gives Samsung momentum against Apple Pay and Android Pay, both of which use near field communication (NFC) technology, which is not widely used in everyday life.

Security is a key issue determining the success or failure of the mobile payment business; Samsung Pay uses tokens and fingerprints to guarantee the same level of security as its rivals.
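The tokens mentioned here can be illustrated with a toy tokenisation vault: the real card number (PAN) never leaves the vault, and the terminal only ever sees a random, single-use token. This is a conceptual sketch of how payment tokenisation works in general, not how Samsung Pay is actually implemented.

```python
import secrets

class TokenVault:
    """Toy tokenisation vault: the real card number (PAN) stays here;
    merchants and terminals only ever see a single-use random token."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        """Issue an unguessable token that carries no card data itself."""
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def redeem(self, token: str) -> str:
        """Resolve a token back to the PAN exactly once; a replayed
        token raises KeyError because it has already been consumed."""
        return self._vault.pop(token)
```

The single-use property is what makes an intercepted token worthless to an attacker, which is the security argument behind token-based payment systems.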

“Samsung Pay offers more comprehensive options by using both NFC and MST technology,” Lee said.

Samsung Pay vs Apple Pay: verdict

Samsung Electronics’ mobile payment system does have the edge: Samsung Pay is expected to dominate the mobile payment market because, by utilising Magnetic Secure Transmission, it is the most compatible with merchants’ terminals.

At launch, Samsung Pay is only available in the company’s home country, Korea. But it will expand to the US on 28 September 2015, and the firm has indicated that the UK, Spain and China will be next to get the facility in the near future.

That signals a more aggressive rollout than Apple Pay, which currently remains limited to the US and UK.

The big question for users is Samsung Pay or Apple Pay? It’s unlikely that many people will be swayed from Apple’s iOS ecosystem solely because of the type of mobile payments Samsung supports.

Perhaps the more important comparison is with Android Pay – Google’s forthcoming mobile wallet service.

Google is also pitching its service as simple to use because it doesn’t need a special app to be launched. But it will require payment terminals to offer NFC support.


Posted in: Business Intelligence, communication, component testing, Developers, e-Commerce, functional testing, Human Interaction Testing, Innovation, IT Security, IT strategy, Marketing, mobile applications, mobile testing, Open Innovation, Rapid Application Development, Research, Security, User Experience, User Interface      