I was talking with some developers about overloaded terminology – like innovation, iteration, and agile – and the word “services” came up. “Services” means different things depending on the context of the conversation: it could be an abstract term for something that needs to provide some data or processing, or it could mean a technology choice, like a web service. A conversation about services with developers and business folks can get confusing, even though the concepts are much the same whether you are designing a technical or a business service.

The technology of services has been around for a while, and the design pattern is relatively well known. But I have found that many developers and architects have forgotten that, to gain the benefits of a service, they must design it in a specific way. Often, you find code that implements some framework, like Representational State Transfer (REST) or the Web Services Description Language (WSDL), but the design and implementation are just a tightly coupled, single-use remote procedure call (RPC).

So, as a public service, here’s a refresher of some service design principles. Keep these items in mind the next time you design or build a service, technical or otherwise:

- Contract first – Determine what the service will provide and how other services can communicate with it. Sometimes a domain-specific language can be helpful in defining the interface, like a modeling language such as the Business Process Execution Language (BPEL).

- Loosely coupled/minimise dependencies – The service should stand on its own and not depend on any other application or service. This principle can be implemented by making sure the service exists for one purpose. The service can interact with other services, but it can still fulfil its purpose without any help.

- Reusability – Some services are what I call “chunky”: they have very large interfaces and return sets, and look like a ‘SELECT *’ query against a huge table. Reusability is not “everything and the kitchen sink”. It means that many intra- and inter-domain consumers will benefit from the service.

- Stateless – The service should be able to finish its work and signal when done. The service should not have to keep track of time or state. That’s not to say that a service can’t be asynchronous … but let some other framework or platform service worry about state and let the service provide its function.

- Discoverable – Within organisations, developers will email interface and functionality notes to others to allow them to use the service. Well, that doesn’t work outside the organisation’s walls. Use Universal Description, Discovery and Integration (UDDI) to allow others to find the service’s interface and purpose “on-the-fly”.
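To make the first few principles concrete, here is a minimal sketch of my own (not from the original post, with hypothetical names): the contract is defined first as plain data types, and the implementation is a single-purpose, stateless function that derives everything from its inputs.

```python
from dataclasses import dataclass

# Contract first: define the request and response shapes before any implementation.
@dataclass(frozen=True)
class QuoteRequest:
    product_id: str
    quantity: int

@dataclass(frozen=True)
class QuoteResponse:
    product_id: str
    total_price: float

# Stand-in for a real data source; the service still exists for one purpose only.
PRICE_LIST = {"widget": 2.50, "gadget": 10.00}

def quote_service(request: QuoteRequest) -> QuoteResponse:
    # Stateless: no memory of previous calls, nothing to track between requests.
    unit_price = PRICE_LIST[request.product_id]
    return QuoteResponse(request.product_id, unit_price * request.quantity)
```

Because the function holds no state and depends on nothing but its input, any number of intra- or inter-domain consumers can reuse it, and some other platform service can worry about queuing, retries, and state.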

To read the original post and add comments, please visit the SogetiLabs blog: AT YOUR SERVICE…

Related Posts:

  1. In the Architecture Office: Service-portfoliozation
  2. Service virtualization: new or old news for software testers?
  3. Taking the “Service” out of Customer Service
  4. Sensible sensing as a service, the moment is now


AUTHOR: Leigh Sperberg
Leigh Sperberg has been a consultant in the Dallas office since 2007. During that time, he has served as practice lead for the Advisory Services, Microsoft and Business Applications practices. In those roles, he has supported customer engagements in the Texas region and nationally, focusing on Microsoft technologies and enterprise architecture.

Posted in: Business Intelligence, communication, Developers, Innovation, Opinion, Technology Outlook      


At the end of October, just before Halloween, I visited the Spanish city of Barcelona to enjoy a holiday break with my wife and attend the vibrant Microsoft TechEd 2014 conference. In this post, I am happy to share my personal top speeches. Below you will find direct links to the 25 selected sessions. Of course, you want to start with the opening keynote: at exactly 9 minutes in, Joe Belfiore, Corporate Vice President PC, Tablet and Phone, starts talking about Windows 10. You wouldn’t want to miss that!


Here are my personal TechEd Top 25 Picks from Barca, a little bit clustered but in no particular order, since they all are equally interesting:

  1. Windows 10 Client Goodness with Joe Belfiore
  2. Microsoft Azure and Its Competitors: The Big Picture
  3. A Game of Clouds: Black Belt Security for the Microsoft Cloud
  4. Managing Cybersecurity Threat: Protect, Detect, and Respond
  5. Why a Hacker Can Own Your Web Servers in a Day
  6. Hacker’s Perspective on Your Windows Infrastructure: Mandatory Check List
  7. CSI: Windows – Techniques for Finding the Cause of Unexpected System Takeovers
  8. The Dark Web Rises: A Journey through the Looking Glass
  9. The Ultimate Hardening Guide: What to Do to Make Hackers Pick Someone Else
  10. Advanced Windows Defense
  11. Sneak Peek into the Next Release of Windows Server Hyper-V
  12. Power BI for Office 365 Overview
  13. Microsoft Analytics Platform System Overview
  14. Microsoft Analytics Platform System Deep Dive
  15. Introduction to NoSQL on Azure
  16. Introducing Data Factory: Orchestration on Big Data
  17. Jumpstarting Big Data Projects: Stories from the Field
  18. Windows Phone and Windows 8.1 App Model
  19. Deep Dive into New App Capabilities in Office 365
  20. Three-Way Data Binding in Universal Apps: View, View Model, and Cloud
  21. Kinect in Your Apps – Build to Amaze!
  22. The New Cocktail: 1 Tablet + 1 PC + 1 Phone + 1 Kinect + 1 Wall, Served Up on a Cloud
  23. Windows for Internet of Things Devices
  24. From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum
  25. CAREER DEVELOPMENT: Next Roles, Next Skills, and Staying Relevant in an Evolving IT World

Now you may feel that I over-emphasised the security theme a little, but surely you will be intrigued by the impressive and inspiring session list above. That’s why you probably can’t wait to explore the entire range of almost 400 Barcelona TechEd 2014 sessions yourself via the conference homepage. Have fun!

To read the original post and add comments, please visit the SogetiLabs blog: MICROSOFT TECHED 2014: MY TOP 25 PICKS FROM BARCELONA

Related Posts:

  1. Calm Technology to Revive Blackberry in 2014
  2. Five common App pitfalls
  3. Considering Windows 8.1 and Mobile devices for the Enterprise with the new Intel Architecture
  4. Windows Store Apps live in the Sandbox


AUTHOR: Jaap Bloem
Jaap Bloem has been in IT since the advent of the PC and is now a Research Director at VINT, the Sogeti trend lab, delivering Vision, Inspiration, Navigation and Trends.

Posted in: communication, Microsoft, Opinion, Technical Testing, Technology Outlook      


For years, we’ve built applications that assume the system is only used from a single location. As a result, most applications work with local time, with the local time set to the time zone the application lives in. So an application of one of our Dutch customers would run in UTC/GMT+1, whereas the reservation site of a Las Vegas hotel would run in Pacific Standard Time (UTC/GMT-8) or Pacific Daylight Time (UTC/GMT-7), depending on the season. You could think that there is no problem; after all, the systems work as they are supposed to. There are, however, at least two problems.

Applications are interconnected

Suppose the application of our Dutch customer would interact with the reservation system of the Las Vegas hotel, for instance to get information about the latest time a reservation can be cancelled. The systems would need to agree on which time to use, and make a conversion when necessary. That is possible but cumbersome, for instance because Daylight Saving Time starts and ends on different days.

Time zone is not the same on every machine

If we move an application to another machine, we have to be sure the time zone is the same on the new machine; otherwise the chance is pretty good the application runs into problems: any operation comparing stored time data against local time would yield different results.

Cloud Platform Time

On cloud platforms such as Microsoft Azure, all machines use the same time: UTC. And when using its PaaS instances, Microsoft recommends not changing that. The best solution is to use UTC anywhere date/time is stored, queried, or manipulated, and to format date/time as local time only for input or output. UTC is the universal time zone: Cloud Standard Time (CST).
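A small Python sketch of that recommendation (my illustration, not from the original post): store and compare the cancellation deadline in UTC, and convert to a local zone only at the edges, when formatting for display.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store in UTC: the latest moment a reservation can be cancelled.
deadline_utc = datetime(2014, 11, 6, 17, 0, tzinfo=timezone.utc)

# Format as local time only for input or output.
amsterdam = deadline_utc.astimezone(ZoneInfo("Europe/Amsterdam"))     # UTC+1 in November
las_vegas = deadline_utc.astimezone(ZoneInfo("America/Los_Angeles"))  # UTC-8 after DST ends

# All three represent the same instant, so comparisons stay unambiguous
# even though the wall-clock times differ (18:00 vs. 09:00).
assert amsterdam == las_vegas == deadline_utc
```

Note that the DST boundary problem from the first section disappears: the tz database handles the fact that Europe left summer time on 26 October 2014 and the US on 2 November, so neither system has to know the other’s rules.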

To read the original post and add comments, please visit the SogetiLabs blog: CLOUD STANDARD TIME (CST)

Related Posts:

  1. The things Cloud does not solve
  2. Is Cloud a return to the Stone Age?
  3. Cloud usage flavors for Development and Test teams
  4. Cloud and Open source go hand in hand


AUTHOR: Michiel van Otegem
Michiel van Otegem is Principal Architect for the Microsoft Business Line at Sogeti Netherlands. In that role he advises clients on strategy and architecture for the Microsoft platform in conjunction with other technologies, specifically focusing on Cloud and Integration. Although focused on Microsoft, Michiel has broad knowledge of other technologies and integration between technologies.

Posted in: Business Intelligence, Cloud, communication, Digital, Microsoft      


Having spent two days at AppsWorld at London’s ExCeL conference centre, it was great to see the level of advancement and constant change that is prevalent across various industries. Technology is playing an ever-increasing role in industry, and the panel discussions that pulled contributors from different areas gave great insight into how similar the challenges are that the consumerisation effect is having on IT programmes.

The conference focused on open content areas, such as:
- Developer World
- Droid World
- Cloud World
- Connected Car
- Gaming World
- Enterprise World

In addition, there was a series of premium tracks that covered a wide array of topics:
- Mobile Payments & Retail
- Mobile Strategy & Marketing
- TV & Multi-Screen Apps
- API Strategies
- Wearable App Tech
- HTML5 Workshops

The use of APIs

As I mentioned, the panel discussions were very beneficial. On Day One, the discussion around “Exploring the business value of APIs – Opening data as a channel for Innovation” was very thought-provoking. With speakers from councils, retailers and product companies, there was a very balanced feel to proceedings. The main takeaway from the session centred on the consistency and availability of message transport. Oliver Ogg (Product Owner of APIs for M&S) focused on how the company is not only providing digital solutions for its customers, but also providing solutions for staff to use in-store, to ensure there is a ‘single source of proof’ for customer enquiries. The digital, omni-channel experience, though focused on the consumer, needs to consider how staff interact with consumers. How the in-store message is translated to the consumer, on their mobile device or at home on their desktop, is key to converting enquiries into product sales.

API conversations, specifically about publicly available APIs, were present across multiple tracks over the two days. Companies are making APIs available in the public domain to encourage innovation in the market. Providing the tools (or guidelines) for developers to be creative in designing new or better ways of completing transactions is actively being encouraged. This was epitomised for me during the open track session presented by Mark Dearnley, HMRC’s Chief Digital and Information Officer. During his session, Mark provided an outline of the Government Digital Strategy, and how, over the course of 2015, HMRC’s APIs will be made publicly available for developers to ‘make things easy’. HMRC has no desire to control the market, preferring to adopt a natural-selection process.

What this means is that if we take the example of Self-Assessment (SA), over the course of 2015 there may be a number of privately developed apps, across mobile platforms, that attempt to make SA submission more efficient. As consumerisation takes its course, natural selection will ensue as app store ratings take effect: only the most user-friendly and easy-to-use apps will survive, reducing the need to control the marketplace. As the APIs are in the public domain, HMRC can control the integration.

CRM Strategies, Push Notifications and App Usage

The session hosted by Patrick Mareuil, Chief Innovation Officer of Accengage, provided a very good overview of consumers’ brand loyalty with respect to app usage. Some highlights of the statistics shared make interesting reading:
- 20% of users access mobile apps only once
- 40% of users access the app between 1 and 3 times
- 40% access apps 11 times or more

These statistics are interesting, as, taking them at face value, we can see a lot of missed potential in terms of consumer engagement.

From a testing perspective, the overview of the way push notifications are used outlined a number of use-case scenarios where companies such as Sogeti can assist with productionising apps ready for general use. Concentrating on the right message, at the right time, in the right place, on the right channel is an important part of maximising message conversion. Being able to replicate these scenarios in testing will provide customers with the right data to complement their digital marketing strategy. As with all approaches, there is a fine line between optimising the interaction with customers and overkill: too much interaction and prompting can have a negative effect on a consumer’s willingness to buy.

In addition, the way in which this marketing activity interacts with other applications on the mobile device should be considered. If a user is playing level 105 of Candy Crush and, at the key moment of completing the level, a push notification interrupts their enjoyment, this again could cause negative feedback. Balancing the need to interact and promote offers against not interfering with consumers’ day-to-day use of smartphones will need to be covered by the test scenarios that constitute the scope of a release. Throw into the equation the different approaches to notifications across device platforms, and the scope of testing increases exponentially if we are to ensure maximum consistency of message across platforms and keep the user experience standard.

Proximity Beacons

A number of the tracks either showcased or made reference to the use of beacon technology as a means to delivering up to date messages and special offers to customers based on their location within a store or theme park.

The use of the technology does, in my opinion, counteract its intrusive nature, as the consumer will be captive and in the right mindset to take on board the advertising messages. Some of the challenges outlined during the various talks centred upon the proximity limitations of the technology. In an expansive space such as a theme park, beacons are unlikely to interfere with each other’s transmissions; however, how do companies ensure that in relatively small retail stores the use of beacons is appropriate and displays the right message at the right time?

It was this example that highlighted for me one of the key challenges in testing beacons. How do you replicate on a large scale? If you set up a test lab with a selection of beacons, do you lose the proximity behaviour of the live store environment? Is it sufficient to test using a small selection of beacons to conduct interruption-testing scenarios?

This is a very real issue that companies need to consider when introducing new technologies to their digital marketing strategies.

Wearable Technology

References to the Internet of Things and wearables brought some interesting viewpoints, but for me the best, and perhaps unsurprising, summary of this area came from the session on “Privacy and the adoption of wearable technology”. In this session, the key message was that most, if not all, policies around data security and protection apply to all devices. Securing the transport of messages from the back-end system through to the ‘thing’ must follow the same policy and legislation.

For me, the same can be said of the development of the ‘thing’ and the testing of such devices. Validating the message transport and identifying weaknesses and vulnerabilities remain the challenge. Validating the display and user experience will require testing; developing omni-channel automation frameworks that maximise coverage while controlling the amount of maintenance will appeal to companies as the industry matures. This is certainly a key area of development that I am overseeing in the Sogeti Studio. In the coming months, we at Sogeti hope to be able to demonstrate these service innovations, to provide customers with an alternative approach to the current mode of operation.

The rise of Crowdsourcing – Testing device compatibility

Device fragmentation, specifically within the Android landscape, raises a number of challenges around the age-old question of “How much testing is enough?” Speaking to a number of companies at the conference, including app development agencies, the challenges were very similar: how do you make sure that the apps you release are compatible with the devices out there? Some of the answers involved using emulators to provide breadth of coverage, complemented with a top-10 physical-device list to provide depth of coverage. Others mentioned reliance on crowdsourcing; booking slots to open up the scope of testing on real devices seems to be becoming a popular supplementary approach to release testing.

When we add operating system platforms and screen resolutions into the mix, there needs to be a more robust way of achieving the right level of quality. Tool vendors need to look at ways of replicating user interactions in a standard manner, to provide options in the marketplace.

All in all, the conference was very thought-provoking and has certainly provided a number of takeaways on how we at Sogeti can answer some of these challenges: extending the current offerings within the Sogeti Studio, developing models that improve testing coverage on devices by complementing emulation with physical-device testing, and creating omni-channel automation frameworks that promote efficiency in the test cycles.

AUTHOR: Daryl Searle
Daryl is a Delivery Director at Sogeti UK, primarily working with financial services clients. He is both a mobile and an agile subject matter expert.

Posted in: API, Crowdsourcing, Developers, Innovation, Wearable technology      


Though many organizations consider the cloud to be a perimeter-less environment, strong security postures must be in place at certain entry points so that threats are disrupted and eradicated at an early stage. Cloud security is vital.

Does the cloud have edges? We refer to the cloud as a perimeter-less environment, with workloads moving dynamically through various physical networks and regions. The cloud is interlinked in such a manner that there is no clearly defined edge to it. So what does it really mean to create cloud security at the edge?

To answer this question, let’s use an analogy. When the world was considered a flat landmass, humans thought it had a physical edge. Once we understood that Earth is a globe, the concept of that physical edge was no longer valid. However, from a logical point of view, the world’s landmass is divided into continents, countries, cities, neighborhoods, apartments, houses, etc. People can move around freely among these various locations. However, each area has its own rules of entry to ensure people traversing these locations are checked for positive intent and don’t have a negative impact at the location. Thus, a strong level of investigation at the port of entry becomes critical.

Similarly, the cloud environment is created by a number of networks coming together. The entry points to each of these enterprise networks become critical from a security point of view. Yes, this was always the case; however, the traditional approach to edge security doesn’t work in the cloud environment simply because the cloud requires much more flexibility in terms of allowing workloads to move around. Standard hierarchy-blocking of IP addresses or restricted entry only creates more false positives. In this new context, network security solutions that can carry out deeper inspections are vital to differentiate between a legal workload and a malicious one.

Tightening Cloud Security

A cloud security solution working at the edge needs the following capabilities:

  • Visibility: It is important to give the security administrator detailed visibility into the kind of ingress and egress traffic that traverses the network, the specific URL categories visited and their IP reputations.
  • Control: Converting that visibility into relevant action is critical, so the enterprise should be able to define granular Web application policies. Blocking interactions with malicious URLs and allowing only the required business access to applications is also important; this reduces the attack surface for attacks against human vulnerabilities.
  • Protection: Large-bandwidth, deep packet inspection capabilities are required to handle traffic through the entry points efficiently. In a cloud environment, the intelligence behind identifying exploits is paramount: it avoids false positives, preserving the flexibility the cloud should provide, and, more essentially, supplies the intelligence to stop mutated and zero-day exploits, since the network is directly exposed to the global threat landscape.
  • Multiple Traffic Type Inspection: The solution should be able to carry out the same level of inspection on encrypted traffic and traffic using varied protocols.

To protect your enterprise within the cloud, you need to create a strong security posture at the point of entry. This ensures that you disrupt threats at an early stage of their life cycle and that your enterprise’s cloud security strategy secures it from the edge.
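As a toy illustration of the “control” capability above (my own sketch, not part of IBM’s post, with hypothetical hostnames and categories): a granular egress policy that blocks known-bad URL categories and allows only the required business applications, defaulting to deny.

```python
# Hypothetical policy data; a real solution would pull reputations from a threat feed.
ALLOWED_APPS = {"crm.example.com", "erp.example.com"}
BLOCKED_CATEGORIES = {"malware", "phishing"}

def egress_decision(host: str, url_category: str) -> str:
    """Return 'block' or 'allow' for an outbound request, most-specific rule first."""
    if url_category in BLOCKED_CATEGORIES:
        return "block"   # protection: known-bad URL reputation wins
    if host in ALLOWED_APPS:
        return "allow"   # control: only the required business access
    return "block"       # default-deny keeps the attack surface small
```

The point of the sketch is the ordering: reputation checks run before the allow-list, so even a whitelisted host is blocked the moment its category turns malicious.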

The original post by IBM can be found here: Cloud Security: Protecting Your Enterprise at the Edge

AUTHOR: IBMSecurityIntelligence
Analysis and Insight for Information Security Professionals

Posted in: Business Intelligence, Cloud, Developers, Opinion, Security      


Technology practitioners spend entire careers simplifying, be it removing complexity to make code more “elegant” or rationalising bloated application portfolios accumulated over time. The driving factors behind this push are self-explanatory: ease of maintenance, scalability, flexibility and cost. Is there ever a time in technology circles, however, when more is actually better?

Consumer mobile applications create great customer experiences by limiting purpose and features. In fact, when feature creep begins to make apps overly complex they are usually split apart, as in the case of Facebook and Facebook Messenger or Foursquare and Swarm. Creating multiple apps would seem counterintuitive to the enterprise IT department (“we should only have one ERP system, therefore we should only have one app”), yet how are the Facebooks of the world able to pull this off in a cost-effective manner?

The answer lies in digital architectures that are designed to be platform- and device-agnostic. These service layers are built with performance and scalability in mind, and can facilitate interactions from anything from a native mobile interface to a web browser. Elegance is critical at this layer of the architecture, so that an easy-to-understand, easy-to-use service catalog (API) can be provided to the teams building the great experiences. The service catalog manages the complexities of interacting with core operational systems, and the apps themselves remain none the wiser.
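One way to picture such a service layer (an illustrative sketch of the pattern, with hypothetical system names, not anything from the original post): a thin catalog facade hides the core operational systems, so the HR app and the sales app each consume the same API without knowing what sits behind it.

```python
# Hypothetical core systems the facade hides from the apps.
def _erp_lookup(employee_id: str) -> dict:
    return {"id": employee_id, "vacation_days": 12}   # stand-in for a real ERP call

def _crm_lookup(account_id: str) -> dict:
    return {"id": account_id, "pipeline": "renewal"}  # stand-in for a real CRM call

class ServiceCatalog:
    """Device-agnostic API layer: every app, on any platform, calls the same
    catalog, and none of them knows which operational system answers."""

    def employee_profile(self, employee_id: str) -> dict:
        return _erp_lookup(employee_id)

    def account_summary(self, account_id: str) -> dict:
        return _crm_lookup(account_id)
```

Because the complexity lives in the catalog, splitting one bloated app into several focused ones adds no new integration burden: each new app is just another thin client of the same layer.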

The consumerisation of IT has placed a premium on user experience, elevating it to new levels of focus and investment in the enterprise. As frightening as it may be for IT departments to see a sprawling landscape of apps spreading across their organisations like wildfire, more can actually be better with enterprise mobile apps. Does it really make sense for a human resource manager and a sales executive to have the same day-to-day mobile experience? If a digital enterprise focuses on architecture and governance it’s possible to stay true to the principles of technology elegance AND enable great mobile experiences for business users with very different needs. That’s the very definition of the power of digital delivered.

To read the original post and add comments, please visit the SogetiLabs blog: TOO MUCH IS NEVER ENOUGH

Related Posts:

  1. Objects Of Desire
  2. Don’t managers deserve beautiful apps?
  3. Does your cool app display your poor operational process?
  4. Digital marketeers : don’t underestimate mobile app projects


AUTHOR: Joo Serk Lee
Joo Serk Lee is a Vice President in Sogeti USA who has spent the past two years architecting major programs at Aspen Marketing Services and at Abbott Laboratories. He is an Enterprise Architect by trade and has spent much of his 15-year career working in and crafting transformation programs featuring complex technologies, including Microsoft/Java stacks, mobility and large CRM and ERP solutions.

Posted in: Developers, Human Resources, mobile applications, mobile testing, Mobility, Open Innovation, Opinion, Technical Testing, Technology Outlook      


Last September 9th, the new iPhone and the Apple Watch drew the attention during Apple’s CEO keynote. But neither was a revolution: the new iPhone is an incremental innovation, and the Apple Watch will find its usefulness only through the applications we will develop for it.

No, the real “revolution”, or at least the most interesting novelty, was elsewhere: in the launch of the Apple Pay service and the systematic integration of an NFC chip in all Apple devices, which allows everybody to make contactless payments with their smartphone.

Apple, without being a pioneer in that area (Google, Starbucks, telco operators, Wal-Mart and so on did it before… without real success), might well succeed where they did not. Why?

First, Apple has real know-how in customers’ usage and expectations. With Apple Pay, Apple seeks to give the consumer every guarantee of security: credit card information is encrypted, payment is validated using fingerprint recognition, and, last but not least, Apple will not have access to the consumer’s purchase behaviour and history like other companies do.

Then, Apple has the capacity to enroll partners to maximize the dissemination of its products. It convinced the largest banks to adopt its new service by agreeing to share the commission on each transaction. Why would banks accept such a deal? Because Apple has a potential base of 800 million iTunes users who have registered credit card information on Apple’s music platform. So it is: follow me or get left behind! This enormous customer database increases the success rate of Apple Pay. In addition, Apple has built partnerships with merchants, like McDonald’s.

Therefore, the company founded by Steve Jobs has put all the advantages on its side to set the rules of mobile payment.

To read the original post and add comments, please visit the SogetiLabs blog: APPLE BETS ON MOBILE PAYMENT

Related Posts:

  1. Top 10 post: “Mobile is not apps or tools, Mobile is a strategy”
  2. Mobile is not apps or tools, Mobile is a strategy
  3. Mobile is not a strategy – go adjectively mobile!
  4. Considering Windows 8.1 and Mobile devices for the Enterprise with the new Intel Architecture

AUTHOR: Philippe Andre
Philippe André is an expert in business and IS architecture, service architecture, system modelling and soil science. Philippe is a Certified Enterprise Architect (L4) and TOGAF 9 certified. His mission is to help clients make the best decisions as far as business and IT alignment is concerned. He works as a link between the architecture and design teams, making sure that architecture decisions and directions are applied in the field.

Posted in: 4G, Business Intelligence, Developers, Digital, Innovation, mobile applications, Open Innovation, Opinion      


“What do you want to be when you grow up?” Becoming a doctor, a lawyer, a policeman or an architect are popular answers to this well-known question, since all of these jobs have traditionally been seen as contributing to social challenges. Nowadays, testing is also a profession that impacts society, although this is not so evident at first sight. Over the last decades, software has become an intrinsic part of business and society. In the United States, for example, the National Institute of Standards and Technology reported in 2002 that software errors cost the U.S. economy an estimated $59.5 billion annually.

In recent years, testing methods and techniques have evolved testing from simply an art into a specific engineering profession. Systems become more complex day by day. Consequently, the complete range of possible scenarios cannot be fully tested, because we usually have limited resources. In this context, experienced testing skills and methodologies are required to enhance testing productivity. Furthermore, testing activities need to be integrated into complex software lifecycles, and defects need to be communicated and managed in alignment with requirements, development and maintenance activities. Therefore, several questions arise: “Are testers simply users?”, “Are testers technicians?”, “Are testers domain experts?”. Over the last year I’ve been asking colleagues around me why they call themselves “testers”, and the conclusion is that tester profiles are a peculiar (and not easy to find) combination of skills that must work together. In summary, testers fit into so-called T-shaped professionals.

T-shaped profiles were initially described by Leonard-Barton in 1995, but the same idea applies to testers today. The T represents vertical and horizontal skills. Vertical skills are abilities that go deep into software and testing techniques and methods, in conjunction with expertise in one or more specific domains and technologies. Horizontal skills are cross-cutting abilities that allow testing to be performed in many different contexts (diverse frameworks, configurations, technologies, tools, organizations, etc.) with rigor and efficiency (project management, organizational and communication skills, etc.). Extending the T in both the vertical and horizontal directions determines the progress of testing professionals.

As proposed in the white paper “Succeeding through service innovation: A service perspective for education, research, business and government”, published in 2007 by the University of Cambridge, software quality education and business training should focus on T-shaped professionals. If we are able to move forward in this direction, maybe in the near future we will hear young children saying, “Yes, I would like to be a testing professional!”

To read the original post and add comments, please visit the SogetiLabs blog: “WHAT DO YOU WANT TO BE WHEN YOU GROW UP?” A TESTING PROFESSIONAL!

Related Posts:

  1. What the *BLEEP* do we know about Software Testing?
  2. Testing: are Agility and Complex projects good friends?
  3. Testing is dead, long live quality
  4. Modelization of Automated Regression Testing, the ART of testing



AUTHOR: Albert Tort
Albert Tort is a Software Control & Testing specialist in Sogeti Spain. Previously, he served as a professor and researcher at the Services and Information Systems Engineering Department of the Universitat Politècnica de Catalunya-Barcelona Tech. As a member of the Information Modeling and Processing (MPI) research group, he focused his research on conceptual modeling, software engineering methodologies, OMG standards, knowledge management, requirements engineering, service science, semantic web and software quality assurance.

Posted in: communication, Developers, Technical Testing, Technology Outlook, Test Driven Development      
Comments: 0


“It is a truth universally acknowledged that a developer in possession of a good software product must be in want of a tester.”

Last week while I was re-reading one of the best books ever written “Pride and Prejudice” (by Jane Austen) a crazy idea came to mind: the relationship between a developer and a tester is quite similar to the one described by Austen in her novel.

For those who haven’t had the good fortune of reading this wonderful novel yet (I strongly encourage you to do so as soon as possible), here is a brief summary:

Elizabeth Bennet is a witty and independent young woman. Mr. Darcy is a rich gentleman. Her early determination to dislike him is a prejudice matched only by the folly of his arrogant pride. But their first impressions and misunderstandings give way to true feelings.

So, how can we relate these aspects? Let me explain it with an example of my own experience.

Last year my test team (let’s call them Miss Tester) had to test an application developed by a group of programmers led by a functional analyst (let’s call him Mr. Developer). This application was very important for the client (it handled billing management), and the development team had worked hard on it and had tested it thoroughly themselves. However, the client insisted on having it tested by Miss Tester, and this decision was not well received by Mr. Developer, who considered it unnecessary: he was very proud of his product, just like the proud Mr. Darcy. Miss Tester, on the contrary, based on previous experiences with other applications, considered it indispensable. She was a little predisposed to prejudice, just like Miss Bennet.

Miss Tester had a problem: even though she was very smart and witty, there wasn’t enough documentation for her to understand how the programme worked. So she had to ask Mr. Developer, but she did so in a rather belligerent way, implying that he might not know the expected behaviour of the application. At the beginning he was haughty and reluctant to help her, which made the relationship between them very difficult. Moreover, every time Miss Tester reported a defect, Mr. Developer’s first reaction was to try to find a way to reject it, arguing that it was not a defect but a misunderstanding of the functionality.

The situation came to a breaking point, and both of them realised that, in order to succeed, they had to find a way to work together and overcome their pride and their prejudices. So, once Miss Tester discovered that the application didn’t have as many defects as she had thought, and Mr. Developer understood that the purpose of the test team wasn’t to belittle his work, they were able to release a good software product to the client, who was really satisfied with it.

In conclusion, the relationship between a developer and a tester is never easy. But since our goal is the same, we must try to get on well and set aside our pride and our prejudices.

To read the original post and add comments, please visit the SogetiLabs blog: PRIDE AND PREJUDICE AND SOFTWARE APPLICATIONS

Related Posts:

  1. What the *BLEEP* do we know about Software Testing?
  2. Service virtualization: new or old news for software testers?
  3. Lower Software Costs with the Right Package
  4. The art of writing test cases



AUTHOR: Paloma Rodriguez
Paloma Rodriguez has been Test Engineer for Sogeti Group since 2011. In this role, she manages testing projects and participates in various publications and training activities.

Posted in: Behaviour Driven Development, Big data, Business Intelligence, communication, Developers, Human Interaction Testing, Opinion, project management      
Comments: 0


The recent blog post “This piece of data is lying!” (parts 1 and 2) showed that the devil hides in the details and that what we observe from data (e.g. a correlation) can be false. This article elaborates on that and gives some more examples of statistical bias that we must all be aware of so as not to be fooled by big data statistical results. The content of this series is inspired by a data science course at the University of Washington in the USA (Bill Howe, Data Science, autumn 2012).

What about big data and statistics?

Let us begin with a Bradley Efron statement: “Classical statistics was fashioned for small problems, a few hundred data points at most, a few parameters.” In addition, “The bottom line is that we have entered an era of massive scientific data collection, with a demand for answers to large-scale inference problems that lie beyond the scope of classical statistics.”

What can go wrong with such observations? The fact is that you can find plenty of positive correlations between unrelated data, for example:

  • The decline of Internet Explorer usage and the murder rate in the US between 2006 and 2011 (cited by Bill Howe)
  • Number of police officers and number of crimes (Glass & Hopkins, 1996)
  • Amount of ice cream sold and deaths by drownings (Moore, 1993)
  • Stork sightings and population increase (Box, Hunter, Hunter, 1978)

These examples support the notion put forward by Vincent Granville, who states that “when you search for patterns in very, very large data sets with billions or trillions of data points and thousands of metrics, you are bound to identify coincidences that have no predictive power”. So the problem is the introduction of statistical bias, which we should discover and remove before beginning to analyse a data set. This series will give you some insight into common biases that come with traditional statistical methods for analysing (big) data.
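Granville’s point about coincidental patterns is easy to demonstrate. The sketch below (plain Python, standard library only; the metric count and sample size are arbitrary choices for illustration) generates purely random “metrics” and shows that, with enough of them, at least one will correlate strongly with an equally random target:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_obs = 20        # observations per metric
n_metrics = 1000  # number of unrelated "metrics" we search through

target = [random.gauss(0, 1) for _ in range(n_obs)]

# Every metric is pure noise, yet searching through enough of them
# is bound to turn up a strong-looking correlation with the target.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n_obs)], target))
    for _ in range(n_metrics)
)
print(f"strongest |r| found among pure noise: {best:.2f}")
```

With these settings the strongest coefficient typically lands well above 0.6, a value that would look impressive in a report even though it has, as Granville says, no predictive power.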

The publication bias

Publication bias was reported a long time ago, but new evidence suggests that this bias is increasing.

What is an example of publication bias? In the field of biomedical research, autism spectrum disorder publications suggest that in some areas negative results are completely absent. What does that really mean? It means that only papers showing significant positive gains get published. In fact, when we try several treatments and only one of them works, we generally publish one article, not twenty!

In the case of studies that cover a larger and larger population over time (i.e. the sample size of the experiment increases), repeating the experiment will normally give more accurate results: you gain statistical power from the larger sample; that is a statistical rule. But usually, you will notice that the actual effect reported by meta-analyses regresses towards zero over time, as illustrated in the figure from Bill Howe’s course.

But if you take all the experiments that could have been reported through publication, published or not, a different pattern emerges.

In fact, we see that this mysterious decline effect does not exist; it appears only because of publication bias: only part of the results (in our example, the positive ones) are reported and published.
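The file-drawer mechanism behind this can be reproduced with a small simulation. In this hypothetical sketch (plain Python; the study count, sample size, and one-sided p < 0.05 publication rule are all illustrative assumptions), the treatment has no effect at all, yet the “published” studies report a clearly positive average effect:

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.0   # the treatment genuinely does nothing
N_STUDIES = 200
N_SUBJECTS = 30

published = []
for _ in range(N_STUDIES):
    # Each study estimates the effect from a noisy sample.
    sample = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_SUBJECTS)]
    effect = statistics.mean(sample)
    std_err = statistics.stdev(sample) / N_SUBJECTS ** 0.5
    # The "file drawer": only significant positive results are published.
    if effect / std_err > 1.645:  # one-sided p < 0.05
        published.append(effect)

print(f"published {len(published)} of {N_STUDIES} studies")
if published:
    print(f"mean reported effect: {statistics.mean(published):.2f}")
```

Averaging only the published effects gives a positive number even though the true effect is zero; as early studies are repeated with larger samples, estimates regress towards the true value — the “decline effect” in miniature.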

In the next article, we will look at the mysterious effect size.

To read the original post and add comments, please visit the SogetiLabs blog: BIG DATA ANALYSIS REQUIRES SOME BACK TO BASIC STATISTICS PRINCIPLES

Related Posts:

  1. Big data and trends analysis: be aware of the human factor!
  2. Machine learning: the next big thing in big data for our everyday life?
  3. Top 10 post: “Machine learning: the next big thing in big data for our everyday life?”
  4. This piece of data is lying! (2/2)



AUTHOR: Philippe André
Philippe André is an expert within Business and IS architecture, Service Architecture, System modelling and Soil science. Philippe is a Certified Enterprise Architect (L4) and TOGAF9 certified. Philippe’s mission is to help clients to make the best decision as far as business and IT alignment is concerned. He works as a link between architecture and design team, making sure that architecture decisions and directions are applied on the field.

Posted in: Big data, Business Intelligence, communication, Opinion, Publications, Research      
Comments: 0