SOGETI UK BLOG

We are in the age of Digital Transformation. The concept is everywhere, in all discussions… but why? Mainly because all companies need to innovate and transform their business models to stay in the competition in a rapidly changing world, driven by fast-changing technologies. Companies need to provide more and more business value for a Business-to-Business-to-Customer/Consumer (B2B2C) ecosystem!

But in this situation, what is the role of the architect? Has it changed? Yes, for sure! The architect’s role now needs to adapt to this new model with respect to business, organisation and governance. The architect should no longer be responsible only for defining how to use one technology or another to build solutions. In the digital transformation era, the architect’s role is to define WHAT solution to build and WHY, and HOW that specific solution will offer business value to the company’s whole ecosystem, for the given business model.

The architect’s role has become a key one in organisations, just like the new Chief Digital Officer’s role. The architect needs to build the bridge connecting the business, the operations and IT! He or she will increasingly define which ‘off-the-shelf’ components to integrate and how to build the new business solutions that will power the new business ecosystem: B2B2C.

The other big concern for architects is optimising the time spent on all the tasks needed to build the solution. The architect’s role has thus become as important as the project manager’s. He or she is the team leader who develops the vision and builds the company’s digital strategy: the strategy that will provide a business advantage and help the company win the competition. This means the architect has to understand the business model and the impact of his or her solution on it!

As an architect, I’ve taken up this evolved role as a great challenge!

To read the original post and add comments, please visit the SogetiLabs blog: How is an Architect’s role changing in the Digital Transformation era?

Related Posts:

  1. How CESCE reinvented itself: A Digital Transformation success story
  2. Digital convergence requires transformation and omnichannel solutions
  3. Key themes for QA & Testing organizations to focus on during Digital Transformation
  4. Quality-driven Digital Transformation

AUTHOR: Olivier Pierrat
Olivier Pierrat joined Sogeti in February 2011 as ADMS Practice Leader for the East Business Unit. In this role, he is responsible for deploying the Digital Convergence offering portfolio, especially the Collaborative and Connected Customer offers. He is also Innovation Leader for the East BU. More on Olivier

Posted in: Business Intelligence, Digital strategy, Innovation, Technology Outlook, Testing and innovation, Transformation      

 

The term ‘Big Data’ has been hyped in recent years, and Data Mining techniques are now being used in many areas. This leads to a growing demand for Data Scientists who can turn data into value. Data Science aims to use the different available data sources to answer four questions:

What happened? -> Report
Why did it happen? -> Diagnose
What will happen? -> Predict
What is the best that can happen? -> Recommend

These four questions are key to understanding and improving business processes. Data mining techniques applied to business process analysis (aka process mining) aim to discover, monitor and improve real processes by extracting knowledge from data readily available in today’s information systems. The goal is not to analyse data for its own sake, but to improve your business processes.

The starting point for process mining is an event log. Each event in such a log refers to an activity (i.e. a well-defined step in some process) and is related to a particular case (i.e. a process instance). The events belonging to a case are ordered and can be seen as one ‘run’ of the process. Event logs may store additional information about events, such as the resource (i.e. the person or device) executing or initiating the activity, the timestamp of the event, or data elements recorded with the event (e.g. unit cost).
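
To make this concrete, here is a minimal Python sketch (with invented case IDs, activities, timestamps and resources) showing how raw events are grouped by case and ordered into traces:

from collections import defaultdict

# Each event: (case_id, activity, timestamp, resource) -- invented sample data.
events = [
    ("case-1", "register request", "2015-06-01T09:00", "Alice"),
    ("case-2", "register request", "2015-06-01T09:05", "Bob"),
    ("case-1", "check stock", "2015-06-01T09:30", "Alice"),
    ("case-2", "reject request", "2015-06-01T10:00", "Bob"),
    ("case-1", "ship order", "2015-06-01T11:00", "Carol"),
]

# Group events by case, ordered by timestamp: each case is one 'run' of the process.
traces = defaultdict(list)
for case_id, activity, timestamp, resource in sorted(events, key=lambda e: e[2]):
    traces[case_id].append(activity)

for case_id, trace in traces.items():
    print(case_id, "->", trace)
# case-1 -> ['register request', 'check stock', 'ship order']
# case-2 -> ['register request', 'reject request']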

Process mining bridges the gap between traditional model-based process analysis (e.g. simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. It compares event data (i.e. observed behaviour) with process models (hand-made or discovered automatically).

There are three main types of process mining [see (Ref. 1)]:

  • Discovery: A discovery technique takes an event log and produces a process model without using any a priori information (see the sketch after this list).
  • Conformance: An existing process model is compared with an event log of the same process.
  • Enhancement: The objective is to extend or improve an existing process model using information about the actual process recorded in the event log, for example extending a process model with performance information to show bottlenecks.
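
As a minimal illustration of discovery, the sketch below derives the directly-follows relation (activity a directly followed by activity b in some case) from the toy traces built above; this relation is the raw material from which discovery algorithms such as the alpha miner construct a process model:

# Toy traces (as produced by the grouping sketch earlier).
traces = {
    "case-1": ["register request", "check stock", "ship order"],
    "case-2": ["register request", "reject request"],
}

# Derive the directly-follows relation over all cases.
directly_follows = set()
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        directly_follows.add((a, b))

print(sorted(directly_follows))
# [('check stock', 'ship order'), ('register request', 'check stock'),
#  ('register request', 'reject request')]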

Last but not least, process mining techniques can be used in an online setting known as operational support. An example is detecting non-conformance at the moment a deviation actually takes place.

These relatively new techniques are applied in several business sectors, private and public, and deliver valuable insights where businesses are under pressure to improve effectiveness and quality [e.g. the healthcare sector, see (Ref. 2)].

Process mining is a new way of looking at your business processes!

References

(1) Wil M.P. van der Aalst, Process Mining: Discovery, Conformance and Enhancement of Business Processes, Springer-Verlag, 2011, ISBN 978-3-642-19344-6.

(2) Ronny S. Mans, Wil M.P. van der Aalst, Rob J.B. Vanwersch, Process Mining in Healthcare: Evaluating and Exploiting Operational Healthcare Processes, SpringerBriefs in Business Process Management, 2015, ISSN 2197-9618.

To read the original post and add comments, please visit the SogetiLabs blog: Data Mining: How to use business data to improve business processes?

Related Posts:

  1. NSA “big data spin-off” company releases data mining tool
  2. Big Data & Analytics in 2015: Development of open innovation platform to drive business results
  3. Mining the web to predict the future: what about ‘long data’?
  4. How do Big Data Pioneers Organize Data Management and IT processes?

AUTHOR: Philippe André
Philippe André is an expert in Business and IS architecture, Service Architecture, System modelling and Soil science. Philippe is a Certified Enterprise Architect (L4) and TOGAF9 certified. Philippe’s mission is to help clients make the best decisions as far as business and IT alignment is concerned. He works as a link between the architecture and design teams, making sure that architecture decisions and directions are applied in the field.

Posted in: Behaviour Driven Development, Big data, Business Intelligence, communication, Developers, Digital, Digital strategy, IT strategy, project management, test data management      

 

The great benefits of the Cloud are its flexibility, on-demand availability, cost effectiveness, scalability and the way it enables a more agile approach to working. When you’re considering your cloud security strategy, you need to ensure that it reflects these characteristics to be truly effective. Maintaining security in the Cloud also necessitates a shared responsibility between Cloud Service Providers and their clients. As it’s impossible for clients to simply walk into a supplier’s datacentre to implement security measures, you need to use tools such as guest operating system firewalls, Virtual Network Gateway configuration and Virtual Private Networks to secure your estate. Only by working together can you ensure that your applications and data are protected, the required compliance regulations are met and maximum levels of business continuity are achieved. It’s essential to examine each layer of a Cloud deployment (physical infrastructure, network infrastructure, virtualisation layer, operating system, applications and data) to determine which security measures fall within the remit of the provider and which need to be dealt with directly by the client.

It’s crucial to choose a provider with a trusted Cloud infrastructure and a dynamic security strategy combining access controls, authentication, encryption, firewalls and logical isolation. You can then design, create and manage your own applications and additional infrastructure in the Cloud, safe in the knowledge that they are as secure as possible from malware attacks, zero-day vulnerabilities and data breaches. It’s also highly recommended to choose a provider that undergoes regular third-party audits to ensure that its security measures adhere to industry-standard frameworks, is innovative, and strikes a good balance between provider and client ownership and accountability.

Microsoft’s White Paper on Azure Network Security is an interesting example of a powerful shared-responsibility security strategy. Azure uses a distributed virtual firewall for the secure, logical isolation of customer infrastructure on a public cloud, balanced with the client deploying multiple logically isolated deployments and virtual networks according to business requirements. Azure’s internet communication security is strict by default, disallowing inbound traffic, but client administrators can enable communication using one of three techniques: defining input endpoints, delineating Azure Security Groups, or using a public IP address. The White Paper gives full details of securing all the different types of communication that you might require, including:

  • Securing communications among VMs inside the private network
  • Securing inbound communications from the Internet
  • Securing communications across multiple subscriptions
  • Securing communications to on-premises networks with Internal or Public Facing Multi-Tier Application

Security Management and Threat Defence are also explored in detail. Administrators can create a VM using either the Azure Management Portal or Windows PowerShell, both of which have built-in security measures: the first assigns random port numbers to reduce the chances of a password dictionary attack, and the second requires remote ports to be explicitly opened. Again, these strong measures can be relaxed by client administrators, and Microsoft gives good advice on how this can be done safely.

Azure offers a continuous monitoring service with a distributed denial-of-service (DDoS) defence system, which is continually improved through penetration testing. Although not covered in the White Paper in detail, it’s worth noting that Microsoft conducts regular penetration testing and also allows customers to carry out their own pre-authorised penetration testing. Network Security Groups are used to isolate VMs within a virtual network for defence in depth and to control inbound and outbound internet traffic. Microsoft’s guidelines for Virtual Machines and Virtual Networks also apply to securing Azure Cloud Services. There have been further improvements to Azure’s network security since the third version of the White Paper was released in February 2014. The most notable came in October 2014, when Microsoft announced the general release of Network Security Groups, with easier subnet isolation in multi-tier topologies, simpler policy validation and compliance, site-to-site forced tunnelling, and VPN support for Perfect Forward Secrecy.
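
Conceptually, a Network Security Group behaves like an ordered list of allow/deny rules evaluated in priority order, with unmatched inbound traffic denied by default. The following Python sketch models only that evaluation logic; the rules and fields are invented for illustration, and real NSG rules match on more attributes (direction, source/destination addresses, and so on):

# Simplified, illustrative model of priority-based rule evaluation (NSG-style).
# Each rule: (priority, action, protocol, destination_port); lowest priority wins.
rules = [
    (100, "allow", "tcp", 443),   # allow HTTPS in
    (200, "allow", "tcp", 3389),  # allow remote administration (illustrative only)
    (4096, "deny", "*", None),    # default rule: deny everything else
]

def evaluate(protocol, dest_port):
    """Return the action of the first matching rule in priority order."""
    for _priority, action, rule_proto, rule_port in sorted(rules):
        if rule_proto in ("*", protocol) and rule_port in (None, dest_port):
            return action
    return "deny"  # implicit default deny

print(evaluate("tcp", 443))  # allow
print(evaluate("udp", 53))   # deny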

Regardless of whether you decide to use Azure or not, the White Paper is worth a read as a good overview of how a strong cloud security strategy divides responsibility between the provider and the client.

To read the original post and add comments, please visit the SogetiLabs blog: Cloud Security is a Shared Responsibility

Related Posts:

  1. The Winds of Change in Cloud Security
  2. It’s the platform, stupid
  3. Hybrid Cloud, Hybrid clients…hybrid solutions!
  4. Cloud usage flavors for Development and Test teams

AUTHOR: Kevin Whitehorn
Kevin is Head of Delivery for all infrastructural and developmental engagements with Sogeti clients in the UK. The engagements he looks after range from Desktop Transformation, Hybrid Cloud implementations and Application Portfolio Refreshes to the introduction of fully Managed Services.

Posted in: Azure, Cloud, communication, Microsoft, Point Zero, privacy, Security      

 

A closed opportunity must always have an order number. Whenever a lead is created, an email address must be provided. If the client has a UK address, the account must always be handled by the UK Sales Office.

These are classic examples of scenarios where you may choose to use Validation Rules in Salesforce. In short, validation rules ensure that your data in Salesforce meets the necessary requirements, anything from business rules to the requirements of your integrations.

The 7 Good Practices:
  1. Never, ever create validation rules directly in the Production environment
  2. Run rules as narrowly as possible
  3. Remember to notify users when there are new rules
  4. Be careful not to create too many rules
  5. NEVER use specific IDs in a validation rule
  6. Please be aware of the difference between layouts and validation rules
  7. Do Testing, Testing and more Testing

Below, each practice is explained in detail:

Never, ever create validation rules directly in the Production environment

Salesforce is really easy to modify and edit in your production environment. If the sales manager, or anybody from another part of the business, comes by and says, “Can you make sure that the contact’s title is always filled in?”, an administrator can very quickly create a validation rule to make the manager happy… and potentially kill an integration or create errors for the next deployment.

It may sound crazy, but the culprit can be anything from a test script that doesn’t fill in roles when it runs (and thereby prevents deployment) to an integration that does not have the data, and so on.

It will take a bit longer, but always create your validation rules in a sandbox. Then test them to make sure they do what they should before deploying to production. This ensures that all the tests are run and, consequently, any errors are detected before the rule reaches production.

(Bonus info: you can run all tests from your sandbox to catch any errors before you deploy)

Run rules as narrowly as possible

This rule is probably the one that gets overlooked most frequently. Take the example of the order number: it becomes a requirement one year after your initial Salesforce implementation because you now want to integrate Salesforce with your financial system, and the integration requires every closed opportunity to have an order number.

Such a validation rule could look like this:

AND(
StageName = 'Closed',
ISBLANK(OrderNumber)
)

The above means that if the Stage is ‘Closed’, there must be an order number. However, it also means you have a whole year of existing data that will never live up to the new standard: if you suddenly need to update all opportunities, every update of that historical data will fail. It is better to narrow down the criteria for when the validation should kick in. With a minor change to the example above, such errors can be avoided:

AND(
ISCHANGED(StageName),
StageName = 'Closed',
ISBLANK(OrderNumber)
)

The rule now checks for an order number only when you close the sale. This means you cannot close a sale without an order number, but all previously closed sales can happily exist without one.

Therefore, run your validation rules as narrowly, or as rarely, as is acceptable for the desired result.

Remember to notify users when there are new rules

Again, the same scenario: there should be an order number for closed sales. If advice #1 and #2 are followed and the rule just gets deployed into production, there is a real risk of confusing the end users. They will be going about their business-as-usual activities when, all of a sudden, they get an error and cannot complete the process they are used to. This can create unnecessary frustration with the process, the system and possibly the IT department.

An announcement on Friday, stating that from Monday onwards these new fields must be filled in, can do miracles to reduce frustration. This further strengthens the argument for starting in the sandbox and then deploying to production: everything is ready for Monday morning, and you simply need to deploy. There are many alternatives to an announcement; the change can be presented at a team meeting or by the users’ immediate superiors. What matters is that it is communicated.

Be careful not to create too many rules

This is a risk that sometimes occurs out of the blue. First you need the ‘City’ field to be filled in, then ‘Reason for Lost’, and then there’s another rule, and another, and so on.

It can result in a process so slow and rigid that it demotivates the end user. Therefore, always ask: is the rule a “need to have” or just “nice to have”?

NEVER use IDs in a validation rule

This is often seen with record types, where people simply use the record type ID. This causes headaches across environments, because IDs change from one environment to another. For record types, use the Developer Name instead; it stays the same. Similarly, if it is possible to use a custom label rather than a text string, do it, or use Custom Permissions instead of profiles and permission sets. These alternatives make the configuration more robust and help ensure that your rules still work when you deploy from one environment to another.
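
For example, a rule written against the record type’s Developer Name survives deployment between environments, whereas a hard-coded ID would not (the record type name 'UK_Account' and the field SalesOffice below are hypothetical):

AND(
RecordType.DeveloperName = 'UK_Account',
ISBLANK(SalesOffice)
)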

Please be aware of the difference between layouts and validation rules

You can make a field mandatory either via a page layout or via a validation rule. If a field is always required, this can be done via page layouts; if it is only required under certain criteria, a validation rule is the right tool. Note the difference when loading data: an insert via the data loader will succeed if the field is only required on the page layout, but will fail if a validation rule requires it and the rule’s criteria are met.

Testing, Testing and more Testing

In the wake of the first advice, it cannot be stressed enough how important testing is. It may well be that you only enforced validation on a single field and can see that it does exactly what you expect. Even so, it is often a good idea to go through the whole process for the object in question. This helps ensure that you have not broken any existing automation and do not block another process that touches the same object. Repeated testing can save a lot of time and resources in the end.

To read the original post and add comments, please visit the SogetiLabs blog: Seven Best Practices for Salesforce Validation Rules

Related Posts:

  1. Why Salesforce joining the Internet of Things is a Big Deal
  2. What are the best practices for improving the quality of an application?
  3. Why not document software development projects?
  4. Modelization of Automated Regression Testing, the ART of testing

AUTHOR: Kenneth Wagner
Kenneth Wagner has been with Capgemini/Sogeti since June 2014 as a Salesforce consultant. He met clients on his first day with the group and has quickly solidified his raison d’être within both the group and its clients. Prior to Capgemini/Sogeti, he served as Sales Operations Manager at an international SaaS company headquartered in Stockholm, where he was responsible for all sales-related analytics and business intelligence, managed and improved all of their Sales & Marketing systems and led training at all the offices. Before that, he worked the floor in sales, using the systems that would later become his career. Kenneth has impressed industry peers on many occasions, being branded a true Cloud Evangelist with a talent for spotting the intersections between business and technology. He sets the bar high and works tirelessly to make a difference in the ventures he is engaged in.

Posted in: Behaviour Driven Development, communication, Developers, e-Commerce, Opinion, Usability Testing, User Experience, User Interface      

 

Recently, I was reading an article, published in de Volkskrant, one of the Netherlands’ popular newspapers (Kalshoven, 2014), about switching from the ‘aftercare’ to the ‘precare’ mode of delivering services and taking action in our economy, our government and our lives.

It referred to a trend that firefighters started: instead of focusing only on fighting actual fires, they also focus on preventing them. The reason is that making people aware of fire hazards and taking precautionary measures costs less than fighting the actual fire and repairing all the damage after the incident. Over the last 10 years, Dutch firefighters have reduced the number of fires by a third with this approach.

The same is happening in the Netherlands’ healthcare sector: more and more insurance companies are focusing on preventing health issues rather than just curing them, again because preventive action ultimately costs less money.

The bottom line is: investing in fire-safe environments prevents high costs in the future.


Ironically, in the same week that I read the article, my manager called me a firefighter, because I had spent the whole week dousing little fires. That made me wonder: can a PMO (Project, Programme or Portfolio Office) prevent project fires as well? Is that how a PMO can bring the best value to a business?

Until now, most PMOs have functioned in a reactive fashion, assisting the P-manager with whatever he or she needs. This automatically leads to questions, discussions and a search for the added value that a PMO can bring.

I think we would see the difference if this change, from reactive to proactive, were factored in when restating the activities of a PMO. Hopefully, this article will inspire both officers and P-managers to redefine the position and tasks of the PMO, and to state a strong business case for it.

From a firefighter to a PMO

As stated in our previous blog post, we see PMOs playing a vital role in reaching your strategic goals by aligning and implementing tools, people and processes. This requires PMOs to have a proactive mindset.

The office needs to actively provide information and advice to people. It needs to challenge every manager and every part of the (change) organisation on what they are doing, how they are doing it, and what they could do.

Proactive efforts from PMOs can be expected in services like benefits management, reporting and monitoring, consultancy and knowledge management.

But, perhaps, the largest transformation that needs to happen is within the mindset of the office itself. So, step out!

PMO Business Case

Still, stating a business case for a PMO remains difficult. I strongly believe this is because a solid, well-implemented PMO puts a lot of effort into prevention.

There is little to no directly observable causal link between prevention and the fires that do not happen; it’s quite hard to measure something that isn’t there. Measuring the effectiveness of a PMO is therefore hard, though not impossible. What is vital is that you state what you want to achieve with the PMO and define solid Key Performance Indicators. Measure your performance before and while working with a PMO.

All of this, together, will make sure that you are taking the right actions and doing them right!

That’s how we can help you implement your strategic goals.

Reference

Kalshoven, F. (2014, December 27). Het spel en de knikkers. Van een nazorg naar een voorzorg economie. de Volkskrant.

To read the original post and add comments, please visit the SogetiLabs blog: PMOs: Change mindset, Adopt the firefighter approach

Related Posts:

  1. Digital Usability (Part 1): Discard Standard Systems, Adopt Portal Approach
  2. How does coaching work in a business approach?
  3. Results from our Executive Summit: CIO’s see big data as a slow but fundamental business change
  4. The Winds of Change in Cloud Security

 

AUTHOR: Sandra Rijswijk
Sandra Rijswijk is currently working as manager of the Sogeti PMO. She has been working for Sogeti since 2008 and has served several customers in several industries, in roles such as Information Security Officer, Delivery Manager, PMO Consultant and Programme Officer.

Posted in: Behaviour Driven Development, Big data, Business Intelligence, communication, Developers, Digital, IT strategy, Managed Testing      

 

Your business is probably already reaping the benefits of Cloud for storing data, running existing applications and hosting environments – perhaps you’ve even started to create re-imagined cloud-native applications? If you’ve made any foray into Cloud (according to RIFT’s 2014 Future of Cloud Computing 4th Annual Survey, around 70% of businesses intend to by the end of 2016), then you’re probably also encountering various issues and wondering how best to mitigate risks that you may not have foreseen at the outset of your Cloud journey. As Gartner observes in the Trending Topics section of its website: “Cloud computing is a disruptive phenomenon with the potential to make IT organisations more responsive than ever.”

Fast, agile, flexible, scalable, on demand: the Cloud enables rapid innovation without many of the usual IT restrictions. However, its recent widespread adoption has led some businesses to voice concerns over issues such as cloud sprawl, cybersecurity, confusion over data ownership, inconsistent Cloud strategies and a lack of overall Cloud control. As with any technology, there are always going to be risks. The question is whether they are at an identifiable, acceptable and manageable level for your business, and whether they are outweighed by the benefits.

Multiplying Clouds

One of the main benefits of Cloud is its cost-effectiveness. PostNL, the Netherlands’ leading mail company, recently partnered with Sogeti to adopt innovative Cloud-based solutions and is now on track to reduce the total cost of IT by more than 30% within 12 months.

However, in some organisations these savings are almost nullified by IT consumerisation and the onslaught of shadow and stealth IT Cloud sprawl, as employees sign up to unsanctioned Cloud services that aren’t part of the approved internal IT strategy. When this is coupled with the trend towards more secure hybrid Cloud solutions, we often see several Cloud environments in one enterprise, leading to very real concerns over cost and sprawl. There are several effective ways to prevent and address these issues:

  1. Strategy – Devise a Cloud strategy that clearly details the number of systems and servers and allows for scalability in both directions. You then have a baseline and can devise a “what if” analysis, so that you can always pull things back to the core requirements if you start to see unmanageable sprawl. Bear in mind any foreseeable periods of increased activity, such as demand spikes, offers or product launches that you know are in the marketing and sales pipeline.
  2. Change Management – Migrating to the Cloud is a big business change, so you need a clear change management strategy in place and an easy-to-use change platform that enables you to track your Cloud progress, manage permissions, raise visibility and avoid sprawl.
  3. Tool Selection – When selecting Cloud management tools, ensure that they are customisable, easily configurable and not specifically tied to any of your underlying systems.
  4. Training – Training is of paramount importance to ensure that all staff coming into contact with the Cloud, from daily use to decision making, have a good knowledge of how the systems work and what the business requirements, strategy and change management processes are.

Cloud Service Brokers

Another way to manage Cloud sprawl (and also multiple service providers) is to use a Cloud Service Broker (CSB). Papermill, print and digital company Mohawk Fine Papers uses a single CSB to handle its on-premise app-to-app integration, the supply integration for its 300 customers, and the tech support and commercial functions associated with working with third-party Cloud providers. Similarly, Men’s Wearhouse uses a hybrid Cloud contact verification service linked to its major business processes to maintain data integrity and quality. As Gartner is quick to point out in its case study, a CSB is not the answer to everything Cloud: even with its CSB in place, Men’s Wearhouse saw instances of slow email identification caused by one of its third-party providers having performance issues. The key takeaway: when selecting a CSB to manage your Cloud, make sure you know what its own supply chain is like and the quality of its providers, so you don’t get a knock-on effect from external problems.

Data on the Move

Moving your data from one Cloud to another remains problematic due to varying formats and storage options, plus unique application services built to run in a specific Cloud environment. The key to preventing this issue is forward planning: know in advance what types of Cloud solution you’ll require and create a data management strategy that is applicable in all environments. This consistency will give you greater freedom to move your data and mitigate the risks. App containerisation, enabling a higher degree of automation and portability and a more streamlined deployment of resources, is the key to success. Your choice of data transport technology will also have a major impact, so choose carefully, taking into account optimum efficiency for storage and sufficient bandwidth. This is particularly important if you have a hybrid private-public Cloud solution, which the majority of businesses currently use so they can manage their most sensitive data internally.

If you’re one of the 49% of businesses that cite Cloud security as a major concern, then have a look at this post by Sogeti’s Kevin Whitehorn for tips on developing a solid security strategy and viewing your Cloud migration as an opportunity to enhance security enterprise-wide.

Sogeti’s Cloud Solutions

Sogeti – Capgemini’s local professional services arm – views Cloud as a ubiquitous design principle in everything we deliver, underpinning our end-to-end portfolio of client solutions. We understand that one size doesn’t fit all: we can help you integrate cloud and non-cloud systems, break down the barriers between your internal organisation and your external suppliers, and navigate a growing customer and partner ecosystem. Sogeti can design, build and manage the Cloud that fits your organisation. Our Cloud services include Cloud Advisory Services, Cloud-based Development & Testing – OneShare, and more comprehensive Cloud Transformation Services.

AUTHOR: Darren Coupland
Darren is the sector head of telecommunications at Sogeti.

Posted in: Apps, Behaviour Driven Development, Big data, Business Intelligence, Cloud, e-Commerce, Enterprise Architecture, IT strategy, privacy, Research, Risk, Risk-based testing, Security      

 

WebDriver is a recent addition to Selenium’s suite of tools for web and mobile app test automation. As with all tools, it requires some set-up time and effort, and depending on the project situation this can be quite complicated: Which browser will be used? On which OS will the tests run? Which IDE and programming language will be used? You have to answer all of these questions.

Sogeti Germany has been busy building a framework that will help you answer some of these questions.

The framework reduces the time needed to get WebDriver up and running to a minimum and, once in place, most of the common browsers can be used in parallel. The framework makes it possible to set up a Selenium Grid in around two hours and have your first test case up and running on it. You can even do this with mobile apps.
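
The framework itself is written in Java (see the advantages below), but the Grid concept is language-agnostic. As a rough illustration only, here is a minimal Python sketch of pointing a test at a Grid hub, assuming a hub with a Chrome node is already running at http://localhost:4444:

from selenium import webdriver

# Run the test remotely on a Selenium Grid node instead of a local browser.
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,
)
try:
    driver.get("https://www.example.com")
    print(driver.title)  # e.g. "Example Domain"
finally:
    driver.quit()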

During the development of the framework, we tried to make it compatible with all kinds of websites and, at the same time, ensure it was still easy to extend or modify to meet project needs. Most of the actions a normal user would perform are already implemented and can be used out of the box.

Advantages of this framework:

  • Completely written in Java, so you can integrate it directly into your own Java project
  • Easy to install
  • Easy to adapt to different kinds of web projects, e.g. highly dynamic sites where the content changes all the time
  • Easy to extend to your own specific project needs
  • Complete dependency management handled by Apache Maven, which takes care of every (third-party) dependency in your project (https://maven.apache.org/ and http://search.maven.org/)

Possible extensions:

  • Connect it to HP ALM (using third-party drivers)
  • Use of Cucumber (BDD)
  • Object Repository
  • Connection to MS Excel
  • Ports to other languages such as C#, Ruby, …

This framework was introduced at one of our clients in Germany and is currently being adopted and reworked for another German client, which is using C# and SpecFlow (Cucumber for .NET) to develop its application. By using this framework you can reduce the setup time massively and have more time for the important things, like testing.

We are constantly working on improving the framework to make it even more powerful.

If you are interested in this Selenium framework, feel free to contact us at kontakt@sogeti.de.

AUTHOR: Christoph Groebke

Posted in: A testers viewpoint, Developers, mobile applications, mobile testing, Test Automation      

 

The Trouble with Twitchy Customers

Last month’s hacking of the Amazon-owned gaming company Twitch made big news, and the events that followed were worrying. At first Twitch sought to tighten security, in part by raising the number of characters required for a customer password; it then lowered the requirement again in response to angry objections from customers who said it made logging in too difficult.

One customer from Texas went so far as to post on Twitch’s Facebook page: “If users want to use a bad password, that’s their problem, not yours.”

This customer backlash against a business that is simply trying to protect its customers from security breaches raises an important question: who exactly is responsible for Cybersecurity? Is it the government’s responsibility, through the laws, policies and guidelines it creates? Are businesses in the private sector, which take our credit card and personal details and store them, to be held accountable for both internal breaches and external attacks? Or is it down to us, the consumers, to choose our passwords wisely and keep our information safe? The truth is that for a security policy to be successful, everyone involved at each stage of an online transaction has to take a certain amount of responsibility and work together towards the common goal of protecting society from malicious hackers.

UK Cybersecurity Essentials

In the UK, businesses that want to tender for government projects must adhere to the new baseline Cybersecurity standards created by the Cyber Security Essentials scheme’s 5 Key Controls, which are:

  • Secure configuration
  • Installing boundary firewalls and Internet gateways
  • Access control and administrative privilege management
  • Patch management
  • Malware protection

Although not mandatory for private-sector projects, hundreds of businesses such as Action for Children, Vodafone, SproutIT and ELEXON are getting certified to show they take Cybersecurity seriously. So here we see the public and private sectors working together and taking responsibility to ensure both their own Cybersecurity and ours. As Cabinet Office Minister Francis Maude observed in a recent ComputerWeekly interview:

“While it’s right the government leads by example, we can’t do it alone. There’s no single magic bullet to neutralise the cyber threat, but the one thing common to all our efforts – whether it’s about resilience, or awareness, or capability and skills – is co-operation.”

The government is now using funds from the National Cyber Security Programme to create Gov.uk Verify, which enables purely digital proof of identity with decentralised data storage; it is being rolled out in the public sector in the hope that the private sector will quickly cooperate and follow suit. Mind you, the PwC 2014 Information Security Breaches Survey also found that “70% of organisations keep their worst security incident under wraps. So what’s in the news is just the tip of the iceberg”, so the private sector still has some way to go before the majority of businesses can claim to be cyber secure and operating transparently.

Barack Obama Fights Back

As the Wall Street Journal reported a couple of weeks ago, U.S. regulators are deeming corporate boards ultimately responsible for successful cybersecurity strategies, and even suggesting that individual directors and security officers should be held accountable, and liable, in the event of a breach. In January of this year, after Islamic militants allegedly hacked the U.S. Central Command Twitter and YouTube accounts, President Obama defended proposed new legislation creating a new level of corporate responsibility by saying:

“When these cyber-criminals start racking up charges on your card, it can destroy your credit rating. It can turn your life upside down. It may take you months to get your finances back in order…so this is a direct threat to the economic security of American families, and we’ve got to stop it.”

Stick to your Cybersecurity Guns!

This is all very well, but what do you do when your customers don’t want to play ball and your CSO’s job and your company’s reputation are at risk? Where your business is legally accountable for Cybersecurity, and breaches put the business and individual board members at risk, it’s a good idea to dedicate a section of your website to informing your customers of the legal requirements and penalties, and how the law is designed to protect them. A section explaining how your security strategy benefits them, and the risks associated with a breach, is also a good way of educating consumers. Similarly, responding to live feedback on social media with a brief explanation of the law, the risks and the repercussions can be helpful.

Ultimately, businesses need to stick to their guns and not bow to customer complaints about increased security measures. They have a duty to all of their other customers, and to the nation as a whole, to help stamp out attacks and breaches, and the only way to do this is for the public sector, private companies and individual consumers to collaborate and take joint responsibility. After all, the customers who are so vocal about the downsides of stronger security will also be the first to complain about your business if your security systems fail and cybercriminals hack into their bank accounts and start spending their money!

AUTHOR: Barry Weston
Barry is Sogeti's Solutions Director for Transformation, and winner of 'Testing Innovator of the Year’ at the 2013 European Software Testing Awards.

Posted in: A testers viewpoint, Behaviour Driven Development, Business Intelligence, communication, Developers, e-Commerce, privacy, Risk, Risk-based testing, Security, Technical Testing, User Experience      

 

Walking out of the Certified Agile Tester examination straight back into a waterfall world, shattered yet full of knowledge, we sat in the nearby pub and one question crossed our minds: when would we use all this new-found agile knowledge? There would need to be a top-down emphasis on becoming agile, as well as buy-in from our development partners. It seemed a million miles away…

It was then that it struck us: the agile methodology is as much a mind-set and character change as it is a change of methodology, with the emphasis on soft skills, trust and the appreciation of people. Last time I checked, aspects of character didn’t need a formal sign-off to implement.

Going home that evening (granted, after a few beers) it was like being in an agile world. Chores became user stories, each an element on an imaginary task board, awarded user story points and prioritised by the value they added. Friday dinner around the table was the weekly retrospective, asking my parents and sibling: what went well, what didn’t go so well, what would you do differently next time? Ignoring the funny looks, I went to bed content, ready to head back to the office on Monday morning and begin creating an agile environment, if not in structure then in actions.

Monday morning arrived and, with a spring in my step, I entered the office ready to unleash my agile actions. However, changing character and human actions turned out to be much easier in theory than in reality. Yes, I was part of a team, but not a team in the agile sense of the word, consisting of developers, testers, BAs and product owners. The actuality was that I was returning to my testing compatriots, ready to test a piece of software and to find and send defects over the proverbial fence, only for our development compatriots to try and throw them back at us (or “defect tennis”, as I like to call it)!

George Bernard Shaw once famously said: “Progress is impossible without change, and those who cannot change their minds cannot change anything.” Would being more agile actually mean progress? Would it ever be an acceptable methodology for people stuck in their ways? Was it too drastic a change? These questions bugged me.

Contemplating it in more detail, I realised that the waterfall and agile approaches were like chalk and cheese, and that there had to be a happy medium, a hybrid, which could be used as an introduction while the team got used to the more drastic agile principles. Surely a project could dip its toes into the agile world without upsetting the apple cart.

The concept of a hybrid provided considerable food for thought. We could arrange daily stand-up meetings between testers and developers and encourage more communication, hold weekly retrospectives to discuss our thoughts, or even get up and discuss defects face to face (shock, horror!). On the surface these are all normal agile practices, but they are bite-sized and easily introduced within a waterfall approach.

These small introductions would begin to change the ‘us and them’ mentality, crossing the first hurdle of agile implementation. More and more agile practices could then be introduced incrementally from release to release as the team settled into a norming phase. This incremental approach could eventually lead to full agile delivery!

I personally believe that, once changed, human actions and characteristics are the most powerful catalyst for fundamental progress. Even though full agile implementation was still a million miles away, at least some stepping stones could be put in place to work towards it.

Only time will tell whether the software delivery world runs with agile for the long term, but it will be equally interesting to see whether the hybrid concept also takes off in parallel…

AUTHOR: Ketan Angris

Posted in: Agile, communication, Developers, Human Interaction Testing, Innovation, Sogeti Studio, Technology Outlook, waterfall      

 

Google implements new algorithm updates around 500-600 times per year, some more major than others. In the past, major revisions such as Google Panda and Google Penguin have had significant impacts on search results, partly due to mobile usability. Because of this, organisations must be conscious of the impact of these algorithm updates when designing a website, in order to avoid a drop in Google search results. A website ranked number 1 or 2 for a search query could fall to ninth or tenth place, causing a loss of potential business revenue.

Research from TechCrunch found that 44% of websites belonging to Fortune 500 companies failed the mobile-friendly test! There is a big category of site owners who have not taken the mobile usability of their website into consideration, focusing instead on PC and desktop platforms.

To make a website mobile friendly, there are a number of things to consider, including:

  • Ranking: Since the end of April 2015, mobile friendliness has been a factor in Google’s ranking algorithm. This tool can help you find out whether your site is mobile friendly (or not): https://www.google.com/webmasters/tools/mobile-friendly/. Companies spend significant amounts of money and effort on Search Engine Optimisation (SEO) to appear high up in Google rankings, with places on the first page of results highly prized. However, even when using all the keywords and SEO tricks in the world, without a high-quality, mobile-friendly website they could be penalised and miss out on the top spots.
  • Accessibility: Text that appears readable on computers but not on mobile devices is one reason why a website falls drastically in the rankings. It is important to test your site across the range of available platforms to ensure consistency in its appearance (a minimal emulation sketch appears below). Developers and testers should ensure that text is readable without tapping or zooming, tap targets are spaced appropriately, and the page avoids unplayable content and horizontal scrolling.
  • Verification of pages for mobile devices: Accessibility and ranking alone may not guarantee a mobile-friendly site that passes the test. The reason could be that Googlebot for smartphones is blocked from crawling resources, such as CSS and JavaScript, that are critical for determining whether a page is legible and usable on a mobile device. You need to make sure that these resources can be crawled.
  • Use of responsive themes: If a page is found not to be mobile-friendly, one issue could be the responsive theme you are using. Try changing the theme and adjusting the layout slightly so that, when someone visits the site via tablet or mobile phone, content such as the site title, post titles and post content can be read on smaller screens without scrolling.

If you would prefer not to switch to a responsive theme, you can enable an option that shows a mobile-friendly, responsive theme to mobile visitors only, ensuring desktop and laptop visitors see the same site they have always seen.
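
As a starting point for the cross-platform checks mentioned under ‘Accessibility’ above, here is a minimal Python sketch using Selenium’s Chrome mobile emulation. The device name and URL are placeholders, and the available device names depend on your Chrome version:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Emulate a mobile device in Chrome to spot-check basic mobile usability.
mobile_emulation = {"deviceName": "Nexus 5"}  # placeholder device name
options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", mobile_emulation)

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://www.example.com")  # placeholder URL
    # A mobile-friendly page should declare a viewport meta tag...
    has_viewport = len(driver.find_elements(By.CSS_SELECTOR, 'meta[name="viewport"]')) > 0
    # ...and should not force horizontal scrolling on a small screen.
    page_width = driver.execute_script("return document.documentElement.scrollWidth")
    screen_width = driver.execute_script("return window.innerWidth")
    print("viewport meta present:", has_viewport)
    print("horizontal scrolling needed:", page_width > screen_width)
finally:
    driver.quit()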

If you’ve tried the above and are still having issues, or you just want some help along the way, contact Sogeti Studio (our UK-based web and mobile test lab) at enquiries.uk@sogeti.com or 020 7014 8900.

AUTHOR: Oluwatosin Oguntade

Posted in: Behaviour Driven Development, Big data, Business Intelligence, communication, Developers, Digital strategy, mobile applications, mobile testing, Opinion, Research      