The Evil Queen asked: “Magic mirror in my hand, who is the fairest in the land?” And the Magic Mirror answered: “My Queen, you are the fairest here so true. But Snow White beyond the mountains at the seven Dwarfs is a thousand times more beautiful than you.” We know the rest of the story. In this sentence from Snow White, the Magic Mirror acted as a parasitic third party which, although it always told the truth, amplified the Evil Queen’s self-love, making properties like pride and vanity emerge. Moreover, the Magic Mirror added information that a normal reflection would not have given: “(…) Snow White beyond the mountains at the seven Dwarfs is a thousand times more beautiful than you.” It gave information that could not have been known in the real world (of course, it’s a fairy tale!), provoking an out-of-control reaction from the Evil Queen and driving the consequences we know.

What I want to show in this third and last article about the complexity of socio-technical systems (see part 1 and part 2) is that when reflexivity is misunderstood or neglected, the world becomes distorted and dysfunctional. As a corollary, when you allow some reflexivity in your socio-technical system, you can control it better.

What is said about reflexivity?

From an anthropological or sociological point of view, “reflexivity is considered to occur when the observations or actions of observers in the social system affect the very situations they are observing, or theory being formulated is disseminated to and affects the behavior of the individuals or systems the theory is meant to be objectively modeling” (Wikipedia). For instance, when ergonomists work on a human interface, what they want to create reflects how they themselves would like to use it inside its ecosystem of use. They create interfaces that reflect their human side. Human beings are reflexive! More globally, society itself can be seen as a reflexive system, of which fashion or rumours, for instance, are visible outward signs.

In computer science, reflexivity means two “dual”[1] things:

- Structural reflexivity: this consists of reifying program code and all the abstract types the program deals with. In the first case, reification of program code allows the program to be handled during execution; it is therefore possible to maintain a program while it is running. In the second case, reification of abstract types allows the program to inspect and modify the structure of complex types.

- Behavioral reflexivity: this concerns, in particular, the execution of the program and its ecosystem. Through this type of reflexivity, a program can modify the way it is executed by modifying the data structures of the interpreter/compiler. The program can therefore use information about its own implementation, or even self-organize to better fit its ecosystem.
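In a language with built-in reflection such as Python, both kinds can be sketched in a few lines. All the names here (the `Order` class, the memoizing `self_adapting` decorator) are mine, purely illustrative, and not taken from the theses cited below:

```python
import functools
import time

class Order:
    def __init__(self, total):
        self.total = total

# Structural reflexivity: inspect and modify the reified class at run time.
print(hasattr(Order, "apply_discount"))    # False: the method does not exist yet

def apply_discount(self, rate):
    self.total *= (1 - rate)

Order.apply_discount = apply_discount      # the running program alters its own structure

order = Order(100.0)
order.apply_discount(0.2)
print(order.total)                         # 80.0

# Behavioral reflexivity: the program changes the way it is executed.
def self_adapting(threshold):
    """Once a call exceeds `threshold` seconds, replace the function's
    module-level binding with a memoized variant of itself."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            start = time.perf_counter()
            result = fn(*args)
            if time.perf_counter() - start > threshold:
                globals()[fn.__name__] = functools.lru_cache(maxsize=None)(fn)
            return result
        return wrapper
    return decorator

@self_adapting(threshold=0.05)
def slow_square(n):
    time.sleep(0.1)                        # simulate an expensive computation
    return n * n

slow_square(4)                             # the slow call triggers the self-rewrite
print(hasattr(slow_square, "cache_info"))  # True: slow_square is now a cached variant
```

The second half is the interesting one: the program observes its own execution and rewires itself to better fit its ecosystem, which is exactly the behavioral case described above.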

How does it work?

Smith (1982)[2] and Maes (1987)[3] defined reflexivity as the ability of a system to inspect and modify its internal structure (structural reflexivity) or its own execution (behavioral reflexivity) while running. In short: it can “reason” and act on itself. Based on these definitions, I made the schema opposite to illustrate how reflexivity works.

In his PhD thesis, J. Labéjof[4] (2012) proposed to take up two challenges. The first is where to apply reflexivity in a System of Systems (SoS): this presupposes introducing a certain degree of variability, hence a dynamic dimension, which must not contradict the system requirements (the non-intrusive property). The second is how to apply reflexivity: it is applied by means of reflexive models, which serve as a meta-layer above the system components. In the figure above, the variability of the structure lies in the wing flaps, the rudder and the elevator. Fortunately, the dynamics of the plane are designed to support this! Still referring to the figure, the reflexive models sit in the pilot’s brain and are most of the time composed of sub-models; for a human, one can call this experience or expertise in a domain. For a non-human system, which cannot benefit from years of learning, Labéjof proposed to model the components of the subsystems (for instance the flaps in our figure) inside the reflexive model of the global system (the SoS composed of the plane and its sub-systems, including the pilot, whose new goal is to climb). Be careful here: this last one is not a metamodel; it is a model that contains models of the sub-systems. To remain non-intrusive, the global model has a dependence graph in which the subsystems involved in reaching the goal depend on the meta-layer, without necessarily knowing the state of that meta-layer. If necessary, a technique called dependency injection can be used to create these dependencies between the sub-systems and the meta-layer dynamically. Labéjof called his contribution “R-*”; it gives a theoretical answer to the two challenges above. R-* can also operate on the ecosystem and the specification, as well as on the components of the subsystems and their communication with other subsystems.
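R-* itself is far richer, but the core wiring idea (sub-systems depending on a meta-layer without knowing its state, with the dependency created dynamically) can be sketched roughly as follows. All names here are invented for illustration, not taken from the thesis:

```python
class MetaLayer:
    """Reflexive model of the global system; it contains models of the sub-systems."""
    def __init__(self, goal):
        self.goal = goal
        self.models = {}

    def register(self, name, subsystem):
        # Keep a model of the sub-system inside the meta-layer.
        self.models[name] = subsystem

    def setting_for(self, name):
        # Sub-systems query this without ever seeing the meta-layer's internal state.
        return 15.0 if self.goal == "climb" else 0.0

class Flaps:
    """A sub-system. Its dependence on the meta-layer is injected, not hard-coded."""
    def __init__(self, meta_layer):
        self.meta = meta_layer             # dependency injection at wiring time
        self.angle = 0.0

    def adjust(self):
        self.angle = self.meta.setting_for("flaps")

meta = MetaLayer(goal="climb")
flaps = Flaps(meta)                        # the dependency is created dynamically
meta.register("flaps", flaps)
flaps.adjust()
print(flaps.angle)                         # 15.0
```

The point of the injection is that `Flaps` knows nothing about how the meta-layer is built or what state it holds; swapping in a different meta-layer requires no change to the sub-system.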
To implement his proof of concept, Labéjof used a framework called FraSCAti, which is based on Service Component Architecture (SCA) and on Fractal, another framework that implements reflexivity. Fractal and FraSCAti started at Inria and France Telecom, with early ideas going back to 2000; both are now supported by the OW2 open source consortium. I invite you to have a look at http://fractal.ow2.org/ and http://wiki.ow2.org/frascati/Wiki.jsp?page=FraSCAti

Then, what to do with reflexivity?

I would say only our imagination is the limit! Think, for instance, of the behavior of a swarm of drones on a battlefield or in a disaster area, or of building more reliable fault-tolerant systems. I will give you a very scary one! We live in a socio-technical world, with screens everywhere, sensors everywhere, and traces we leave everywhere; the Magic Mirror is for tomorrow. The question is who will have it first: Snow White or the Evil Queen?


[1] More often than not, they are linked together.

[2] B. C. Smith, Procedural Reflection in Programming Languages, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, PhD Thesis, 1982.

[3] P. Maes, Computational Reflection, Ph.D. Thesis, V.U.B, Brussels, 1987.

[4] J. Labéjof, R-*, Reflexivity serving the Evolution of System of Systems, PhD Thesis at UST Lille, 2012 (Original title: « R-*, Réflexivité au service de l’Evolution des Systèmes de Systèmes »)

Patrick Marquet AUTHOR: Patrick Marquet
Patrick Marquet has been a practice leader for Sogeti since 2007 and began his career as a developer in the consulting industry in the early 90’s. He specializes in project management, consulting in enterprise architecture & technology, consulting in the organization & processes of IT service centers, and team management.

Posted in: Enterprise Architecture, Socio-technical systems, SogetiLabs      

 

For several years now, I have been teaching a course named “Getting Started with Enterprise Architecture”. It is a three-day course meant to help practitioners make a start with implementing an architectural practice within their organization. It is great fun to give this course, and evaluations indicate that the participants also benefit from it. Which is very fortunate, of course.

One of the lessons I try to teach in my classes is that architecture cannot be the sole responsibility of the architects. Architecture relates to decision making throughout the organization. Of course being an architect entails having particular skills and knowledge. But that does not mean the rest of the organization does not play a role in making architecture effective. The navigation system in your car does not relieve you from the task of steering the car in the right direction.

Until recently my classes consisted of architects who had some knowledge of and experience in architecture and were searching for ways to further implement architectural thinking and acting. Lately, however, I have noticed a change in participants. My classes no longer consist of architects, but increasingly of practitioners in roles such as information analyst, developer, test manager, project manager or team manager. This change happened gradually. Asked why they attend a course on enterprise architecture, they give two kinds of answers: either they increasingly encounter architectural prescriptions in their daily work, or they are expected to play their part in architecture development.

This makes me hopeful. Is it a sign that maybe we are finally making the shift from enterprise architecture being a function within an organization to enterprise architecture being a competency of the organization? With architects who possess the knowhow to translate strategy into structure, and an organization that translates and applies this knowhow in its daily routine because it helps in making better decisions?

Marlies van Steenbergen AUTHOR: Marlies van Steenbergen
Marlies van Steenbergen started her career with Sogeti Netherlands in the role of service manager enterprise architecture in 2000. After working as a consultant for a few years, she became Principal Consultant Enterprise Architecture in 2004. In this role, she is responsible for stimulating and guaranteeing the development of the architectural competence of Sogeti Netherlands. Since 2012 she has been the main proponent of enterprise architecture and DYA within Sogeti Netherlands.

Posted in: Enterprise Architecture, SogetiLabs, Training      

 

I was recently asked to go into a large financial institution to do the technical project management of an upgrade of TestDirector 8 to Quality Center 11.52. It involved hundreds of projects and hundreds of users. As most readers will know, this is a massive upgrade spanning several version releases; multiple quantum leaps, in effect. Here’s how we approached it:

Naturally, the upgrade couldn’t be carried out in one day over a weekend using the same machine that had hosted TD8, where we would have put a disc into the TD8 server, clicked the ‘Upgrade’ button and, a few hours later, TD8 would magically have become ALM 11.52. Unfortunately, it was much more complex than a desktop application upgrade.

There had been a few previous attempts to carry out the upgrade internally, but during all of these attempts the client had encountered issues. This is why they decided it was time to call in the experts from Sogeti.

What was called for, primarily, was emotionally intelligent leadership. Aside from the normal stakeholder management, I needed to work closely with the Architects and contacts within HP to come up with the technical specification of the staging environment in which we would upgrade the large number of TD8 projects to ALM11.52. This is what we decided on:

It was decided to go from QC9 to QC10 instead of QC11, in order to delay the migration of the repository to the new SMART Repository in ALM 11.52. Twelve new servers had to be built – seven of them VMs, all of them re-deployable.

The next thing I needed to do was get high-level estimates from the Server Team for all of the components, bearing in mind I was implementing a Disaster Recovery strategy too. I then went to the Project Management Office (PMO) for assistance with the creation of a Project Initiation Document (PID) and project governance.

A work track was then created to monitor the effort put into the projects by the various areas of the business, along with infrastructure costs. These costs were treated as Capex, coming out of the small budget the Project Sponsor had for the upgrade project. The consultancy costs, which included my time on the project, had to come out of Opex, to ensure there weren’t overruns on the budget. These costs were raised through POs and were recorded in the work track as ‘Fixed Costs not Billable’ to the project.

I worked with Architects, Database Analysts and IS Delivery to ensure the staging environment was delivered. This environment included seven new virtual servers and five new physical servers, which took six months to procure and implement. Full DR capability was also achieved in this time for pre-production and production environments.

The industrialisation of the upgrade/migration process took nine months from beginning to end. When we finished it was robust, resilient and efficient, with fail-over capability. Active Directory (LDAP) authentication capability was also implemented, although the client wished to stay with Quality Center authentication for the time being. Many TestDirector 8 projects and users were upgraded to ALM 11.52, making the project a success.

This success was only possible because of good teamwork and empathy for those on the team, both in-house and supplier-side. Empathy was more important than IQ on this occasion, as in most.

To find out more about Sogeti’s Software Testing consultancy services, including tool selection and implementation, please visit our website.

Robert Boone AUTHOR: Robert Boone
A hands-on, pragmatic ISEB certified Test Automation Manager with over ten years of automated test tool and automation development/testing experience.

Posted in: HP, Infrastructure, project management, Software testing, Working in software testing      

 

This series of blog posts is entitled “Managing new complexity”, but why is it new? The “Internet of Things”, with devices everywhere, “software everywhere” and people in the middle, means we have had to face new behaviors, including an explosion of interactions and an emergence of properties, without theoretical foundations. Humanity has faced this kind of challenge before, for instance from the first steam machine (Heron of Alexandria, 1st century A.D.) to Carnot and the first laws/theories of thermodynamics in the 19th century: so many questions!

Socio-technical systems can be, for instance, an enterprise, a department in a public administration, a community in a social network, or a city which offers more and more interactions with its citizens, via mobile applications for instance. Right now, with my laptop, I constitute a socio-technical system. Let us focus on this. From a System of Systems (SoS) point of view, I am a biological-ethical system and my computer an IT system. We both have goals: right now mine is to write this article in interaction with a word processor, a sub-system of my computer, and my word processor’s goal is to offer me word processing and sometimes (!) to correct my spelling mistakes. The system composed[1] of my computer and me has a main goal; in this context it is a written article with no spelling mistakes. What about other properties of such an SoS? For instance, what is its spatial boundary or its time boundary? When I leave my chair to get a coffee, and I’m still thinking about this article, which preserves the goal of the SoS, how may I characterize this property? When I am writing and suddenly a thought which has nothing to do with the article crosses my mind (a sub-system of me), interrupting the main goal while my fingers (with a certain degree of autonomy) are still pursuing it, how can I characterize this property? How can I represent it, model it, design it, if I want to simulate it? In fact, there is no clear and shared answer in the various communities working on SoS. In my reading about SoS and how to characterize and define them (problematic #1), I found a PhD thesis, “Proposal for a Product-Centric Multi-Scale Modeling Framework of an Enterprise Information System”[2] by J.P. Auzelle, which contains a synthetic summary of the main characteristics of SoS shared by the different communities working on the subject. The figure below is adapted from it.

[Figure: main characteristics of SoS as identified by various authors, adapted from Auzelle’s thesis]

OK, here is the beginning of a shared characterization and definition of SoS. But are there any theoretical foundations for SoS (Problematic #2)? While SoS are a reality, what should one base one’s reflections on to improve SoS design and operation? The best approach, right now, is to ask your neighbor, think by analogy (sometimes it works!), find best practices in your field of operations, try and… pray! The main cause of this lack of a unified theory of SoS is the multidisciplinary nature of SoS. For instance, interoperability, which is a fundamental concern of SoS, must be designed and operated at various levels. The European Interoperability Framework[3] has four levels: legal, organizational, semantic and technical. You therefore need many types of expertise to address these four levels: lawyers, sociologists, psychologists, engineers, (…), plus climatologists, agronomists, physicists, etc. if you are working on an SoS in the field of agriculture or on a supervision system based on satellites. Furthermore, as you can see in the figure above, several authors have identified characterizations of SoS and have tried to define them, but still there is no set of agreed principles for SoS, and no established conceptualization of SoS. On this last point, many people involved in SoS think we need a new way of thinking. Reductionism, the common way of analyzing things or situations, is not enough and not even appropriate. For instance, if you seek the optimum of a system, be sure it will not be the sum of the optima of the sub-systems which compose it. A holistic approach tends to be needed, as practiced in the cybernetics communities, for instance.
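To make the point about optima concrete, here is a toy illustration with invented utility curves: two sub-systems share a resource, and the global optimum sits at neither local peak.

```python
# Two sub-systems share 6 units of a resource; the utility curves are invented.
def u_a(x):
    return 10 * x - x * x                  # local optimum at x = 5

def u_b(y):
    return 6 * y - y * y                   # local optimum at y = 3

# The sum of the local optima would need 5 + 3 = 8 units: infeasible.
# Searching over the whole allocation of the shared 6 units instead:
grid = [i / 100 for i in range(601)]       # x in [0, 6]
best_value, best_x = max((u_a(x) + u_b(6 - x), x) for x in grid)
print(best_x, best_value)                  # 4.0 32.0 -- at neither local peak
```

The system optimum allocates 4 units to one sub-system and 2 to the other, which is not what either sub-system would choose in isolation: optimizing the parts separately does not optimize the whole.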

An SoS sometimes has behavior that seems a bit bizarre, or at least unexpected, as it relates to emergence (Problematic #3). This fundamental property of SoS may come from changes in the ecosystem in which the SoS is immersed, from changes to individual systems and/or from changes to their interactions. Emergence can be beneficial or detrimental; in both cases it is mostly unpredictable, meaning you cannot design it. At the very least, it is difficult to operate. As there is no theoretical foundation for SoS, there are no fundamental models of SoS to provide a scientific understanding of how and why emergence occurs. Note that in case the emergence is beneficial, it could be interesting to exploit the situation; at present you can detect it, but sometimes it’s too late.

What about the “damned human factor” in all that? Human aspects (Problematic #4) are usually taken into account as a high risk in security policies, although the higher levels of interoperability are concerned with human aspects, either at the individual or the organizational level. Human aspects are not seen as an integral component of the SoS, with, for instance, statistical, fuzzy-logic or stereotyped behaviors or interactions with other sub-systems.

Last summer, an active SoS community, T-AREA-SOS[4] (Trans-Atlantic Research and Education Agenda in Systems of Systems), proposed an agenda for 2020 composed of twelve themes to work on in order to better understand, control and command SoS. I gave you a summary of the first four above. The others are as follows:

- Multi-level Modeling (Pb #5): the need for a meta-modeling approach, which joins the need for theoretical foundations.

- Measurements and Metrics (Pb #6): what to measure, and when, is still not complete and consistent enough, due to the independent management and operation of the components.

- Evaluation of SoS (Pb #7): even if there are many things to take from the software engineering discipline, for instance SPEM (Software & Systems Process Engineering Metamodel), further work is needed for highly reconfigurable SoS and those which involve more physical and social aspects.

- Definition and Evolution of SoS Architecture (Pb #8): architecture frameworks like Zachman, IAF and, more recently, TOGAF, with their high-level and holistic approach to enterprise architecture, are good candidates for future research that “should address dynamic architectures, partitioning, and the explicit means through which architecture informs the decision making process”.

- Prototyping SoS (Pb #9): needed due to the lack of knowledge about emergent behavior, which may not be observable until the SoS is complete.

- Trade-off (Pb #10): knowing all the possible trade-offs between an SoS and its ecosystem, and inside the SoS, is difficult and even impossible due to the lack of characterization and definition of SoS and to the dynamic nature of the SoS and its ecosystem.

- Security (Pb #11): interoperations are needed, but they are also breaches in security. This problem area needs research to develop the “means to ensure that the physical, ethical, and cyber aspects of safety and security of the SoS, its people, its end-users, and its social and physical environment are properly addressed”.

- Energy-efficient SoS (Pb #12): all SoS need energy; “there is a need to operate them in such a way as to minimize the detrimental effects on” their ecosystems.

In Toulouse, Sogeti is working on an ambitious innovation program that tries to include new visions, methods and techniques in the field of SoS. We call it “Cockpit for Big Systems” (CBS): a centralized and agile solution to control and command ultra-large-scale systems (on the scale of 1 million assets (CPU, GPU, RAM, HD, printers, licenses, …) in organizations with around 100,000 employees). Any inputs are welcome!

Next week I will talk about the potential of Reflexivity in our context of SoS.


[1] Is it a composition or an aggregation? We will see later; the answer to this question is not so easy.

[2] Original title « Proposition d’un cadre de modélisation multi-échelles d’un Système d’Information en entreprise centré sur le produit », Jean-Philippe Auzelle, PhD thesis from H. Poincaré University of Nancy, 2009.

Patrick Marquet AUTHOR: Patrick Marquet
Patrick Marquet has been a practice leader for Sogeti since 2007 and began his career as a developer in the consulting industry in the early 90’s. He specializes in project management, consulting in enterprise architecture & technology, consulting in the organization & processes of IT service centers, and team management.

Posted in: Internet of Things, mobile applications, Open Innovation, Research, Security, Socio-technical systems, SogetiLabs      

 

1. ScrumMaster role without an Agile mindset.

The role of a ScrumMaster is a very important one. Then why is this role still given to people without an Agile mindset and/or experience? The project manager often gets picked for it. For organisations that are still transforming from a more traditional way of working, this seems an obvious choice: the ScrumMaster role is a managing one, so a project manager is the obvious pick, no? Wrong! Project managers want to control how the work is done, when the work is done, and what kind of work is done within a certain timeframe. You can’t blame them; over the years they have been held accountable for the outcome of the project and therefore want to manage the team. In Scrum, however, the whole team is responsible for the outcome and is self-managing.

So don’t randomly pick a ScrumMaster; if possible, let the team be part of the selection procedure. Don’t go for the obvious project manager pick. Make sure the person is a servant leader who knows what Agile and Scrum mean, so that he can coach and teach the way of Scrum within and outside the team. If you can’t find anyone, send people who fit the description and have an Agile mindset to a ScrumMaster training. I did this as well; it really gives you new insights and, besides that, is fun to attend.

2. Scope is Fixed.

Wrong! Scope is not fixed. What a lot of people forget is that the big difference between Scrum and traditional ways of working is the way they approach the devil’s triangle. Traditionally, scope is seen as fixed while time, cost and quality are allowed to change. In Scrum it is the other way around: scope can change. So make the team aware that scope is flexible, whereas time, cost and quality aren’t.

3. Product owners can’t make decisions alone

“Hi, I am the product owner. I have great ideas on how to improve this application and innovative new ideas that really add value, yet I constantly have to ask my superior or his manager whether they have time to check that they feel the same way” is not what you want to hear as a Scrum team. The risk is that the PO becomes the bottleneck for the team. This results in a team that lags behind in creating new value for your organisation, just because they have to wait far too long for any decision to be made.

Make sure that people outside the team don’t make the decisions for the team. In Scrum they aren’t responsible for what is being delivered; the team is. The product owner is part of that team, and he should be given the mandate he needs and the responsibility he deserves.

4.  Team isn’t a team
Then why do I stumble upon dedicated teams within Scrum teams? One for the documentation, one for coding, and one for testing. And they work like this: the 1st sprint is about creating the functional documentation, the 2nd sprint is about realizing the documented functionality, and the 3rd sprint is about testing the documented functionality. Or they put the three phases in one sprint, in which case testing will be done less thoroughly because of the time shortage this creates. Does this sound familiar? It should, because it’s waterfall put in the Scrum framework.

Teams should be multidisciplinary and work as one. Scrum is about teamwork, collaboration, transparency, keeping it simple, etc. So don’t create teams within the Scrum team.

5. Scrum team is too large
The bigger the team the better, right? You can get more done in less time. Wrong! If a team gets bigger than 9 people, it becomes harder to communicate and collaborate; the team loses its dynamics, and this results in fewer kaizen moments.

So that’s why you need competent team players who like working in multidisciplinary teams and don’t mind working simultaneously on new functionality. The size of the team should be around 7, plus or minus 2.
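A common way to quantify this (a classic rule of thumb, not from the Scrum Guide itself) is to count the pairwise communication channels in a team, which grow quadratically with team size:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
def channels(n):
    return n * (n - 1) // 2

for n in (5, 7, 9, 12):
    print(n, channels(n))  # 5 -> 10, 7 -> 21, 9 -> 36, 12 -> 66
```

Going from 7 to 12 people more than triples the number of channels, which is one concrete reason a team of 7 plus or minus 2 stays manageable while a bigger one does not.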

Overall:

Try to follow the simple rules Scrum describes (https://www.scrum.org/Scrum-Guide); there aren’t many. Stay aligned with the Agile Manifesto. Make sure you have an experienced ScrumMaster who knows how Scrum works and spots the signs of “waterscrum”. Get a product owner who can handle the responsibility and has a mandate, so he can make decisions. Create a multidisciplinary team.

Jos Punter AUTHOR: Jos Punter
Jos Punter is an Agile test expert who is enthusiastic and doesn’t like the status quo. He is always looking for ways to improve, for a better or another way, to innovate, change, adapt and get better. Jos loves new technology and gadgets which support him in achieving his goals.

Posted in: Agile, Collaboration, project management, Scrum, SogetiLabs, waterfall      

 

From working on several innovative projects, I am convinced that the innovation process benefits from ecosystems that mix up ways of thinking and facilitate an open mindset. The question is how to create momentum among organizations that have different challenges, DNA and cultures.

In 2010, we started to think about this and tried to identify connections with research laboratories. However, the research sector in France is often affiliated with universities that don’t have direct access to industry or the market. As a result, a lot of ideas and innovations are not promoted and deployed.

What a delightful challenge to connect these worlds, so far away from each other. And you know what? It’s working! I have the pleasure of introducing to you the innovative wedding in Bordeaux. The wedding took place in mid-2010 with the signing of a partnership agreement between Sogeti and the Laboratoire Bordelais de Recherche en Informatique (LaBRI). The aim of this cooperation plan was to create a common basis of R&D, leading to the development of educational programs relating to new technologies. With this partnership, Sogeti has the opportunity to facilitate access to new models and algorithms for innovative research projects. The benefit for the LaBRI is to work with an IT services company in order to implement research work in the real world and to package industrial solutions.

About the “children”

Several projects are currently underway in very diverse areas of technology.

For example, the “Multibox” project heads towards the deployment of a “Media Ecosystem” by proposing a comprehensive architecture with innovations and evolutions for the common actors of the Networked Media Value Chain: the Content Providers, the Service Providers, the Network Providers and, of course, the End-Users. The project’s challenge is to propose an open and modular architecture to support the easy creation and deployment of such a networked “Media Ecosystem”.

Different innovations have been driven by this project: a context-aware framework for media services provisioning, user profiles, home-box-assisted content distribution, a management system for virtual multi-domain multi-provider content-aware networks…

In this example project, LaBRI is the project owner, and Sogeti’s team has carried out various developments, with skills primarily on mobile.

“Octobre bleu” is another way to work. This project, which is being launched, is piloted by the operational team from Sogeti. The objective is to package application components for video and sound analysis. Different business cases are concerned, from the monitoring of public areas (abnormal behavior detection) to the observation of animals during migration periods.

In this second project, the LaBRI brings expertise in the processing of video data. Two more partners were onboarded on the project: one is a local customer and the other an association. Another new way to expand the ecosystem!


“Plate-forme mutualisée” is the last project being launched. The objective is to build a shared platform providing Big Data services. This platform will give project teams an experimental environment, in the cloud.

In conclusion, this is a win/win collaboration that will create and enrich an ecosystem in order to bring added value to the market. And…

“we congratulate the married and wish them to live happily ever after with a lot of interesting projects in the future!”

AUTHOR: Dominique Colonna
Having joined the Group in 1997, Dominique has specialized in the development of new technology projects on component architectures or third-party software. He participated in the CMMI certification of an accelerated development center for this kind of solutions.

Posted in: Big data, Cloud, Collaboration, Open Innovation, Research, Technology Outlook      

 

Scrum means the end for testers who do not want to change. In order to survive, the tester must adapt to the changing development environments.

With the deployment of an Agile development approach, such as the frequently used Scrum, each team member has a testing role. Designers, programmers, managers and users are in fact all testers as well. Therefore, it is plausible to suggest that testers are unnecessary. Does this seem short-sighted? Recently, an IT manager of a large organization suggested that the test department could be halved since they are working with Scrum. However, this IT manager should know that you cannot expect all team members to suddenly have the same extensive testing skills as the experienced tester. What if it were the other way around: a Scrum team with testers only? Would they all ‘suddenly’ be able to design and program? Probably not! Adapting to the changed circumstances will take time for all team members.

In practice this means that, in a Scrum approach, testers should help their team members fulfill their roles as testers as well as possible. For example, they can assist in the preparation of high-quality user stories. The tester could teach the designer and user how to apply evaluation techniques (e.g. the INVEST model), help the programmer with the preparation of unit tests, or support the user with the acceptance tests. Besides this, the tester should moderate the product risk analysis, which can be seen as one of the most important activities in a Scrum project: with the outcome of this analysis, the team is able to find the balance between the investment in time (and money) on the one hand, and the risks covered on the other.
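One concrete way a tester can pair with a programmer on unit tests is to contribute the boundary-value cases that developers tend to skip. A minimal sketch using Python's built-in unittest module, with an invented `apply_discount` function standing in for real production code:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical production function a tester might help cover."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    # Boundary-value cases an experienced tester would insist on.
    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_full_discount(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 101)
```

Run with `python -m unittest`. The boundaries (0, 100, just out of range) are exactly where a tester's test-design techniques add value over happy-path checks.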

In this way, and in order to survive, the test professional evolves into a catalyst for quality improvement!

Leo van der Aalst AUTHOR: Leo van der Aalst
Leo van der Aalst has more than 25 years of testing experience. He is experienced in international consultancy projects and has developed, among other things, services for agile testing and the implementation of test organisations.

Posted in: Agile, Scrum, Software Development, SogetiLabs, User Experience      

 

I love to quote Abraham Maslow: “If the only tool you have is a hammer, you treat everything as if it were a nail.” But what if that hammer is a $1,000,000 software package? Generally speaking, IT consultants need more tool diversity. Too many are overly reliant on a single platform, language or vendor. Consultants stop exploring, and tend to view innovation as the incremental changes that come with new releases of their favorite hammer. This is especially true when consultants align with a single vendor.

I have personally experienced the hammer behavior several times over my career, but never more clearly than when working for a wheel manufacturer in Detroit. The client wanted a website to share product information, or “brochureware”: a static website. There was no need for commerce, back-office integration or complex functionality.

I presented these needs to a team of experienced IT professionals, and they responded with a quote of $250,000 using Microsoft SharePoint running on Azure. The response was shocking to me. What the client was really asking for was GoDaddy and WordPress. The total projected cost should have been less than $10,000. Which do you think the client picked?

There are IT roles where becoming the technical expert on a specific tool is required. The IT industry will pay handsomely for deep, tool-specific skills. It will also reward equally well those who provide vision and strategy independent of tools. To fulfill the latter, you need diversity across the IT landscape.

Consultants advising businesses on IT strategy must keep an open mind to possibilities beyond their trusty hammer. Business managers looking for a solution need to find the right tool/software, for the right job, at the right price, and make sure their advisor is doing the same.

Robert LeRoy AUTHOR: Robert LeRoy
Bob LeRoy is Vice-President of Application Development in New Technology for Sogeti USA. He has led multi-million dollar projects, managed major partner relationships and organised go-to-market strategy for national service offerings.

Posted in: IT strategy, Software Development, SogetiLabs      

 

In the mid-2000s, a new set of touch-based devices arrived on the market. Since then, interfaces have had to adapt to improve man-to-machine communication. Touch handles the input (man-to-machine), while the screen remains the machine’s main channel of response.

Sci-fi movies – an important source of inspiration for innovators! – had already imagined dialogue with the machine. The idea is not new, but it requires implementing a series of innovation building blocks.

But what is the vocal interface made of? Mainly two components:

- Speech recognition (Human-to-machine)
- Text-to-speech (Machine-to-human)

Then again, the idea is not new, and the technologies that are required to develop such solutions have been in place for about 20 years now.

Text-to-speech is quite easy to implement, as it merely consists of transcribing words into sound waves. This technology is already used in different areas such as road navigation, PBX telephony and personal assistants (e.g. Siri). Possible innovations here are connected to new data-management algorithms that would smooth the diction and make the pronunciation more natural.

On the other hand, speech recognition is slightly more complicated to implement. The technology has to deal with the heterogeneity of human diction, due to the diversity of languages and of accents within those languages. The two existing systems are:

- Learning-based speech recognition (mono-speaker model): the machine learns the user’s pronunciation over time. This requires a certain adjustment phase, but it allows clear recognition of the words being spoken, without concern for grammar. Application domains: voice dictation, personal assistants.

- Grammar-based speech recognition (multi-speaker model): there is no learning phase and the system can be used immediately, but the machine expects a certain wording order, according to a grammatical scheme. Application domains: telephony, personal assistants.
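To make the grammar-based approach concrete, here is a hypothetical grammar written in the JSpeech Grammar Format (JSGF, one of the standards mentioned below). The recognizer can only ever return word sequences that this grammar produces:

```jsgf
#JSGF V1.0;
// Hypothetical phone-command grammar (names invented for illustration):
// the recognizer is constrained to exactly these word sequences.
grammar phone;

public <command> = <action> <target>;
<action> = call | dial | text;
<target> = home | office | voicemail | <digits>;
<digits> = (zero | one | two | three | four | five | six | seven | eight | nine)+;
```

This is why no learning phase is needed: instead of modeling one speaker's pronunciation, the system narrows the search space to a small, predefined language.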

But then why aren’t these tools more widespread? Their adoption depends on three factors:

1. The first one is technology availability. It must be standardized and sharable so that software can integrate it. This is already the case: Sun (today Oracle) introduced norms and technologies (JSpeech Grammar Format, JSAPI via JSR 113 for Java environments, VoiceXML, etc.) in the 2000s. Besides, open-source solutions such as FreeTTS for text-to-speech and CMU Sphinx for speech recognition have democratized these technologies (see this 60-line program available on GitHub: https://github.com/lcotonea/BaraGwuin).

2. The second one is usage. Any new man-to-machine interface requires adapting the software to integrate the technology and bring real value to the user. This has been under way for the last 10 years, and the technology has become more and more widespread: all tablets and smartphones are now equipped with it.

3. The third factor is intelligence. Today, speech recognition depends on the grammatical analysis of the sentence being pronounced and is limited to an exhaustive vocabulary (the VoiceXML standard, for example). In brief, the machine now has ears and a mouth, but it still needs to process the information and find the best possible answer, which is more difficult to implement.
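The “exhaustive vocabulary” limitation can be sketched in a few lines of Python. This is purely illustrative (the command set and intent names are invented), but it mimics how a VoiceXML-style system behaves: any utterance containing a word outside the fixed vocabulary is simply rejected, however sensible it might be.

```python
# Invented command set mapping required words to an intent label.
COMMANDS = {
    ("call",): "telephony.call",
    ("check", "voicemail"): "telephony.voicemail",
    ("weather",): "assistant.weather",
}
# The exhaustive vocabulary: every word the "recognizer" will accept.
VOCABULARY = {word for pattern in COMMANDS for word in pattern}

def recognize(utterance):
    """Return an intent label, or None if the utterance falls
    outside the grammar's exhaustive vocabulary."""
    words = utterance.lower().split()
    if any(w not in VOCABULARY for w in words):
        return None  # out-of-vocabulary word: no match, no guessing
    for pattern, intent in COMMANDS.items():
        if all(w in words for w in pattern):
            return intent
    return None
```

For example, `recognize("check voicemail")` matches, while `recognize("play music")` returns `None`: the system has no way to handle meaning it was not explicitly given, which is exactly the gap that predictive-analysis work aims to close.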

To do so, predictive-analysis technologies are currently being developed to compensate for this weakness and eventually create a more interactive, more intelligent machine.

Mobile OS manufacturers (Apple, Google, Samsung) and the big robotics producers are already working in this domain, and you can already see some signs of it.

Here are two examples:

- A vocal interaction example with the robot Nao, the fruit of collective work between Aldebaran Robotics (who developed Nao) and Nuance (who edits the voice-dictation software Dragon NaturallySpeaking): https://www.youtube.com/watch?v=Qw2k40NDCxg#t=99

- Interacting with Nao by Pierre Lison (Department of Informatics, University of Oslo): https://www.youtube.com/watch?v=tJbdyXimYE8.

Source: TV series “Äkta människor” (Real Humans)

There are multiple experiments going on, and there is no doubt the technology will soon be available. The next steps are the standardization and democratization phases, and finally the practical applications of man-to-machine vocal interaction. Probably by the dawn of the next decade!


AUTHOR: Loic Cotonea
Loïc Cotonéa has been an Architect for the Sogeti Group (Sogeti ATC) since 2011. In this role, he supports technical presales and technical project delivery, and is responsible for customer advisory steps. Previously, he served as a software architect and open-source representative (for the Rennes location) in Technicolor’s R&D division (digital TV) and as CTO of a web startup specialized in travel solutions (B2C).

Posted in: SogetiLabs, Technology Outlook      

 

Sogeti Ireland CEO James Govan explains how a robust quality assurance process can lower costs and improve time to market.

With the prolific adoption of digital devices, the fast-paced trend of government-to-citizen (G2C), business-to-business (B2B) and business-to-consumer (B2C) transactions being conducted on mobiles and tablets continues to grow across all industry sectors.

The ease of use and portability of mobile devices, coupled with the power of ‘apps’, has resonated with both organisations and users alike. It is clear that a great digital experience is no longer a nice-to-have, it is a must-have in order to stay in business and meet ever-increasing user expectations for ease of connectivity and multiple channel access.

Increasingly, mobile (m-business) is disrupting existing industries in exactly the same way the introduction of the web (e-business) did around ten years ago. Think of how Hailo is redefining the taxi industry with its simple, elegant app that is revolutionising how both the service providers (taxi drivers) and users (passengers) interact.

The founders of Hailo set out a clearly defined strategy for their m-business model with a focus on delivering a great user experience and quality solution for their end-users – taxi drivers and passengers. In order to disrupt the market, it was not sufficient just to deliver a functionally rich app, it also needed to have proven high quality characteristics such as performance (rapid response times), reliability (always-on), security (payments) and ease of use (usability).

However, many organisations are still grappling with their digital strategy and approach, particularly those organisations with legacy IT systems which can be difficult to change and enhance to enable ‘appification’ and m-business readiness.

What are the challenges facing your business?

In the frenetic rush to get something out to market quickly, project and software lifecycle shortcuts are often taken which can create a poor user first impression across all measures of quality: poor usability, unreliability, slow performance, security threats and low functionality. In the new digital age, users are very unforgiving of low quality solutions.


How can you face these challenges?

For all organisations making the move to digital, Sogeti’s experience is that quality assurance (QA) must be engaged top-down and end-to-end across all business functions; doing so directly translates into reduced time to market, lower costs and improved quality and performance. The stakeholders in the digital world include users, partners, marketing, operations and IT, among others.

There is a battle under way in many organisations as to which function owns the digital strategy – is it the Chief Marketing Officer (CMO), the CIO or the CTO? – but there is no doubt that it is a topic that has the attention of the C-level in many companies. Regardless of functional ownership, having a well-defined cross-functional quality assurance strategy is essential to ensure m-business solutions are deployed first time and that they work and gain user acceptance and traction.

Sogeti’s latest research shows that just over half (55 per cent) of organisations have introduced mobile testing practices in 2013, which is significantly up on last year’s figure (31 per cent).

However, it is a common myth that testing functionality on mobile is sufficient in itself. It is not. Functional testing is only aimed at finding defects and is a small subset of the over-arching QA strategy needed to deliver quality mobile solutions.

It is imperative that organisations devise an appropriate mobile QA strategy that covers the end-to-end lifecycle from development to deployment and considers quality characteristics beyond pure functionality such as usability, performance and security.

What steps can you take?

Sogeti recommends including the following QA disciplines alongside your functional mobile testing to ensure your organisation has a robust and adequate approach in place.

Installation and launch testing: Recent studies show that 60 per cent of mobile users will abandon your app or site if it doesn’t load within three seconds. It is critical that you have a test approach that checks that your app and mobile solutions install and launch correctly on the target deployment platform and devices.
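A launch check can be as simple as timing the launch or first-load step against a budget. A minimal Python sketch, with invented helper names; the three-second budget is the abandonment threshold cited above:

```python
import time

# Three-second budget from the study cited above; helper names are
# illustrative, not part of any real mobile test framework.
LOAD_BUDGET_SECONDS = 3.0

def measure_load_time(fetch):
    """Time a launch/fetch step; `fetch` is any zero-argument callable,
    e.g. lambda: urllib.request.urlopen(app_entry_url).read()."""
    start = time.monotonic()
    fetch()
    return time.monotonic() - start

def within_budget(seconds, budget=LOAD_BUDGET_SECONDS):
    """Pass/fail verdict against the load-time budget."""
    return seconds <= budget
```

In a real suite this check would run per device and per network condition, since a launch that passes on Wi-Fi can easily blow the budget on 3G.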

Device compatibility testing: With multiple platforms (phone, tablet), multiple operating systems (iOS, Android, Windows, Blackberry) and communications service providers (carriers) forming part of the deployment ecosystem, it can be difficult to test on all combinations. Using emulators offers some benefits, but the only true test is to conduct it on real devices and over a real carrier network. A proper automation approach is a must to ensure repeatable and cost-effective testing.

Performance testing and optimisation: There are many testing issues to consider here but the main items are:

• connection speed – check how your app performs in wi-fi, 3G and 4G networks;

• data usage – an increasing problem for both carriers and users (badly designed apps make inefficient use of the carrier’s network resources and reduce performance, and poorly developed apps generate extraneous data for users who then incur higher than necessary usage charges); and

• battery usage – it might be surprising to see this listed here, but fast battery drain is one of the major issues with badly developed apps and a major cause of user dissatisfaction.

Usability testing: This is critically important to test the user interface and user interaction with the app to ensure a high-quality user experience. Mobile users want to start using their new apps quickly. If logging in and navigation are not easy or intuitive, they are most likely to discard the app after first use.

Security testing: This covers a wide range of issues including data confidentiality, user authentication, user authorisation, data retention and malware vulnerabilities. This is one of the least developed areas of testing within most organisations.

Compliance testing: This can be considered in two areas.

Firstly, legislative compliance. Governments, government agencies and legislators around the world are struggling to keep legislation up to date with the rapid pace of change in mobile apps and m-business. Organisations such as the US Federal Trade Commission (FTC) have documented guidelines on how organisations must comply with advertising standards and restricting mobile app services for children (www.onguardonline.gov/mobileapps). Your QA organisation must be familiar with such legislation and guidelines and have tests in place to validate compliance.

Secondly, app marketplace compliance. The major players (AppStore, Android Marketplace, Windows Marketplace, Blackberry AppWorld) all have checklists that must be complied with before mobile apps can be uploaded for distribution. These checklists are extensive, updated on a very frequent basis, and your QA team needs to keep abreast of the compliancy requirements.

Mobile is the future

The importance of quality assurance cannot be overstated when you are implementing a digital strategy for your organisation. Companies which are leveraging QA leading practices, processes and tools for their mobile apps and m-business solutions are gaining competitive advantage, as they are delivering quality solutions to the marketplace.

As the market for mobile solutions continues to grow exponentially, organisations which make real-world testing coverage and QA a priority will be rewarded with greater market share, user loyalty and increased profitability. In the future, your business may not just be about ensuring your products and services are accessible on mobile. As Hailo has demonstrated, mobile may be your business.

For information on Sogeti UK’s mobile testing offering:
Please visit our website:   
http://www.uk.sogeti.com/Our-Services/Software-Testing-Services/Mobile-Testing-Services/
Tel: 0207 014 8900

James can be contacted as follows:
Tel: +353 (0)1 639 0163
Mobile: +353 (0)86 8372365
Email: james.govan@sogeti.com

AUTHOR: Sogeti UK Marketing team

Posted in: Automation Testing, Digital, functional testing, mobile applications, mobile testing, Mobility, Quality Assurance, Research, Security, User Experience      