SOGETI UK BLOG

Virtual Reality aims to improve the user experience on a daily basis. When Virtual Reality adapts itself to real-time experiences, the result is what we call Augmented Reality. The technology has attracted some interest, but it is not yet in everyone’s hands. Simply virtualise the environment you see in real time and you’re in! Welcome to the world of augmented reality!

The gaming industry has understood how to use augmented reality. Its augmented reality games have been a great success and remain all the rage among geeks. Drakerz, for example, builds its gameplay on augmented reality: physical playing cards act as the support for virtual content, enabling a paperless playbook that coaches everyone from beginners to advanced players. By analogy with what virtual technologies already offer (serious gaming, design, preview, etc.), imagine the industrial applications of augmented reality: transpose the Drakerz approach into a paperless implementation that replaces the average user manual. Whether your audience is made up of apprentices or experienced practitioners, augmented reality lets you reinforce their skills with information displayed at the right time, in the right place. Yes, Stark already did it…


Ingress by Niantic Labs

Another game, same success: Ingress, developed by Niantic Labs, overlays a virtual layer on reality, allowing players to meet, discover new places and points of interest, and capture new territories. The game travels with the player through a rich, self-sustaining world. Why invest in these technologies? Don’t miss the boat! Use information the way Ingress does, to report hostile situations and to simulate your transportation assets and infrastructure. Mix the real and virtual worlds for a more complete immersion in your simulation.


Iirmos Project

In the same way that we test our systems SIL and HIL (Software In the Loop, Hardware In the Loop), we can now run immersion tests using Augmented Reality and create RIL (Real In the Loop).

You may have to find the right mix between the real and the virtual. It’s up to you!

To read the original post and add comments, please visit the SogetiLabs blog: SIMULATE & AUGMENT YOUR REALITY!

Related Posts:

  1. The Augmented Reality … for which Reality?
  2. Choose your own reality
  3. Innovation and IT services company: buzz or reality?
  4. Google cardboard or the day when VR became accessible

 

 

AUTHOR: Thierry Dolmière
Thierry Dolmière is a graduate in engineering physics instrumentation. He has been working on simulation projects since 2006. He is now in charge of coordinating innovative R&D projects in Toulouse, France.

Posted in: Business Intelligence, communication, Developers, Innovation, Open Innovation, Opinion, Virtualisation      
Comments: 0

 

When organising testing, the test manager adhering to the traditional view on testing had to structure the testing activities in a hierarchical way, based on quality characteristics. But a test manager often distinguished various stages too. Defined terms such as Test Level, Test Type, Test Phase and Test Stage were often used.

In today’s view on testing, the people involved in testing are hesitant to use the word ‘Test Level’ since it seems to imply that various groups, based on various hierarchical responsibilities, will perform various testing tasks without any interaction between these test levels. Moreover, many testers have often struggled to distinguish between Test Levels and Test Types. And a Test Stage – is that identical to a Test Level or not? What should be our focus when organising testing?

All testing activities must collectively cover all important areas and aspects of the system under test – that is the main objective. To cope with the confusion around how to distinguish testing tasks, we introduce the term Test Variety.

The term Test Variety aims at making all stakeholders aware that there will always be different needs for testing, and therefore different test varieties will have to be organised. Whether these are organised separately or combined depends on the situation. There may be many reasons for having different test varieties. For example, there are different stakeholders who ought to be involved; programmers have a different focus in their testing than business representatives do. This is often related to responsibility and accountability for testing activities. The quality characteristics that have to be addressed form another reason for distinguishing test varieties. Maintainability, for example, demands totally different testing activities than usability does.

Traditionally, different aspects were separately approached as a group of testing activities that had been brought together in a test level. Many people know the ‘functional acceptance test’, a name that already indicates that testing was not complete because it obviously didn’t focus on non-functional aspects. In the new view, functional and non-functional testing can be seen as test varieties. Depending on the circumstances, such as the application lifecycle model that is used, these test varieties are organised either together or separately. The main concern is that all relevant test varieties are carried out one way or another.

So let’s use the term Test Variety to make everybody involved aware that there are different points of view on testing activities, and to make sure that the interests of all stakeholders are covered by addressing them in a well-considered way.

To read the original post and add comments, please visit the SogetiLabs blog: TEST LEVELS? TEST TYPES? TEST VARIETIES!

Related Posts:

  1. The art of writing test cases
  2. Six corner stones of a Test Center of Excellence
  3. Testing is dead, long live quality
  4. Is it a good idea solving test environment problems “later”?

 

AUTHOR: Rik Marselis
Management Consultant, Quality & Testing

Posted in: Application Lifecycle Management, Big data, Business Intelligence, Developers, Opinion, Testing and innovation      
Comments: 0

 

#Ebola spreads faster on Twitter than the real disease, and the Earth is living the nightmares shown many times by Hollywood.

How can Big Data help us?

A few days before Ebola was declared an epidemic, HealthMap developed an algorithm to detect and predict how the haemorrhagic fever would evolve. Scientists are not yet convinced of its potential, but it is now clear to the global community that IT can really help.

By collecting tweets, news, calls to emergency services, airline data and more, the algorithm can predict the evolution of the disease. Time is key in this kind of emergency, and big data alone will not be enough. The advantage of an “algorithmic approach” is that the problem can be attacked from all angles. For example, Nigerian Minister of Communication Technology Omobola Johnson highlighted how technology and social media were key success criteria in her country becoming Ebola-free.

Are these entrepreneurs heard by the community? Is this knowledge cross-shared in order to obtain some good for the entire world?
We can see some first steps: Microsoft offered its Azure platform, the Gates Foundation has already given $50M, and IBM created dedicated hotlines that help people (through calls and texts) by dispatching ambulances and highlighting power problems. But the impression is that this is all still too specific to the IT world.

My final question has, however, a much larger scope and – unfortunately – I don’t have an answer…

We are today surrounded by all the advances that technology can bring us (if you read French, I strongly suggest this, where Google Glasses are used during remote surgery), but are we sure that we are not confusing the content with the container?

The technology is just the container, the machine we can use to achieve a purpose, but WE are the value and WE are the change. I like to see that at Sogeti we don’t forget this, and we really leverage IT in order to “give back” to others, because – in the end – we are the real Big Data.

To read the original post and add comments, please visit the SogetiLabs blog: WILL BIG DATA SAVE US FROM EBOLA? WHERE DOES IT PROVIDE ITS REAL VALUE?

Related Posts:

  1. Save the Planet with Machine Learning
  2. Machine learning: the next big thing in big data for our everyday life?
  3. Top 10 post: “Machine learning: the next big thing in big data for our everyday life?”
  4. Big data and trends analysis: be aware of the human factor!

 

AUTHOR: Manuel Conti
Manuel Conti has been at Sogeti since 2010. With a technical background, he leads onshore and offshore teams, mainly on the Microsoft SharePoint platform, building intranet and internet websites.

Posted in: Azure, Big data, communication, Microsoft, Opinion, Research      
Comments: 0

 

If you’re a security professional you’ve probably heard that we could be short of 2 million security professionals worldwide by 2017. At least that’s what speakers at the Digital Skills Committee at the House of Lords in London said recently. The basic thought is that we’re not training enough students to be security professionals and there is an increasing need for security professionals as we face further reliance on the Internet for banking, commerce and entertainment.

Add to these pressures the expansion of Internet-enabled devices, the Internet of Things, and you can easily see a shortage of 2,000,000 professionals within the next two years. The only problem is, we may be underestimating the real need by another 50-100%.

Finding the Right People

Ask anyone who’s tried to hire a qualified security professional within the last five years and you’ll hear a story about the difficulty of finding the right people. Finding the right skill and the right person, even for an entry-level security role, is difficult. And it only gets more painful when you’re looking for someone more experienced or with a specific skill set that’s in high demand. It drives up salaries in the field, it lengthens candidate searches and it sets unrealistic expectations for new people coming into the field.

But the real reason we’re likely to suffer an even higher deficit in security professionals is two-fold. First is the concept of technical debt, more specifically security debt. Security has been an add-on for decades, something that was either ignored or added as an afterthought, which has only really been changing in recent times. We haven’t put the resources necessary in place to properly protect many of our systems, and that security debt has been gathering interest silently in the background. As we start digging into these problems, it’s likely we’ll find they are much bigger than they appeared because past deficits will be revealed.

The second, closely related issue is a rising storm of issues in older software, which is creating a new norm in security vulnerabilities. If you work in security and haven’t lost sleep to Heartbleed, Shellshock, Poodle or the latest bug in Drupal, you should consider yourself very, very lucky. And as the industry starts looking deeper into the old software we all rely on, as researchers re-examine the foundational code that makes the Internet run, we’re going to have more emergency patches issued and lose more sleep responding to the fire drills. The stress caused by this increase in emergency-class events means we can’t continue doing incident response as normal; we will need new processes, new communication channels and, most importantly, more people involved so that we don’t burn out the few people we currently have.

Making it Work

Long term, education is one of the biggest solutions to the deficit of security professionals, but it’s not going to help us within the next two years. The reality is that it takes more than two years to get a degree programme created and running in any discipline, and while there are quite a few schools that currently have a security curriculum, it’s simply not enough. And a degree doesn’t make a security professional; a certain level of curiosity, tempered by cynicism and disbelief of the status quo, is needed. There are any number of challenges a security professional faces in their career, but one of the underlying threads is that you have to be prepared to dig a little deeper than the data suggests on the surface.

Short term, what we really need is to work harder at making security an integral part of business practices. We’ve talked about this integration for years, but at all too many companies it’s still just something that we pay lip service to. There are islands of support in development groups or IT, but how many companies can really say they have a security practice that has supporters and integration everywhere from the CEO down to marketing and sales? If we can’t find new people to hire in the near future, we need to modify our processes and procedures to take advantage of the people we do have outside the security team. If your incident response plans don’t include marketing for communication, sales for explaining the issues to your customers and the CEO for making the tough calls, then there’s still work to do around integrating with the business.

Eventually, market pressures will increase the number of people choosing security as a career, but it’s not going to be quick and it’s not going to be in the next two years. In the meantime, it’s going to take leadership that can make the most of the resources we do have and reach outside what we traditionally think of as the security team. And the front-line security professionals of today are going to have to become the leaders of tomorrow, to teach all the new people coming in from colleges and farther afield.

The original post by IBM can be found here: Where Will the next Generation of Security Professionals Come From?

AUTHOR: IBMSecurityIntelligence
Analysis and Insight for Information Security Professionals

Posted in: Human Resources, Internet of Things, Security      
Comments: 0

 

I was talking with some developers about overloaded terminology – like innovation, iteration, and agile – and the word “services” came up. Services mean different things based on the context of the conversation: it could be an abstract term for something that needs to provide some data or processing, or it could mean a technology choice, like a web service. A conversation about services with developers and business folks can get confusing, since the concepts are much the same whether you are developing a technical or a business service.

The technology of services has been around for a while, and the design pattern is relatively well known. But I have found that many developers and architects have forgotten that to gain the benefits of a service, they must design the service in a specific way. Often, you find code that implements some framework, like Representational State Transfer (REST) or Web Services Description Language (WSDL), but the design and implementation is just a tightly coupled, single-use remote procedure call (RPC).

So, as a public service, here’s a refresher of some service design principles. Keep these items in mind the next time you design or build a service, technical or otherwise:

- Contract first – Determine what the service will provide and how other services can communicate with it. Sometimes a domain-specific language can be helpful in defining the interface, like a modeling language such as Business Process Execution Language (BPEL).

- Loosely coupled/minimise dependencies – The service should stand on its own and not have a dependency on any other application or service. This principle can be implemented by making sure the service exists for one purpose. The service can interact with other services, but can still fulfil its purpose without any help.

- Reusability – Some services are what I call “chunky” – they have very large interfaces and returns, and look like a ‘Select *’ query from a huge table. Reusability is not just “everything and the kitchen sink”. It means that many intra- and inter-domain consumers will benefit from the service.

- Stateless – The service should be able to finish its work and signal when done. The service should not have to keep track of time or state. That’s not to say that a service can’t be asynchronous … but let some other framework or platform service worry about state and let the service provide its function.

- Discoverable – Within organisations, developers will email interface and functionality notes to others to allow them to use the service. Well, that doesn’t work outside the organisation’s walls. Use Universal Description, Discovery and Integration (UDDI) to allow others to find the service’s interface and purpose “on-the-fly”.
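
To make these ideas concrete, here is a minimal sketch of a contract-first, stateless service. It assumes a Python/Flask stack purely for illustration; the endpoint, sample data and field names are hypothetical, not a prescription:

# A sketch of a small, single-purpose service (requires: pip install flask).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Contract first: the interface is fixed and documented before implementation.
# GET /customers/<id> -> {"id": ..., "name": ..., "status": ...}
CUSTOMERS = {1: {"id": 1, "name": "Acme Ltd", "status": "active"}}  # stand-in store

@app.route("/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    # Stateless: each request carries everything needed; no session is kept.
    # Loosely coupled: the service exists for one purpose and depends on nothing else.
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)
    # Reusable without being "chunky": a focused resource, not a 'Select *'.
    return jsonify(customer)

if __name__ == "__main__":
    app.run(port=5000)

Note that nothing about the client leaks into the service: any consumer that can speak HTTP and JSON can use it.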

To read the original post and add comments, please visit the SogetiLabs blog: AT YOUR SERVICE…

Related Posts:

  1. In the Architecture Office: Service-portfoliozation
  2. Service virtualization: new or old news for software testers?
  3. Taking the “Service” out of Customer Service
  4. Sensible sensing as a service, the moment is now

 

AUTHOR: Leigh Sperberg
Leigh Sperberg has been a consultant in the Dallas office since 2007. During that time, he has served as practice lead for the Advisory Services, Microsoft and Business Applications practices. In those roles, he has supported customer engagements in the Texas region and nationally, focusing on Microsoft technologies and enterprise architecture.

Posted in: Business Intelligence, communication, Developers, Innovation, Opinion, Technology Outlook      
Comments: 0

 

At the end of October, just before Halloween, I visited the Spanish city of Barcelona to enjoy a holiday break with my wife and attend the vibrant Microsoft TechEd 2014 conference. In this post, I am happy to share my personal top speeches. Below you will find the direct links to the 25 selected sessions. Of course you want to start with the opening keynote: at exactly 9 minutes in, Joe Belfiore, Corporate Vice President for PC, Tablet and Phone, starts talking about Windows 10. You wouldn’t want to miss that!


Here are my personal TechEd Top 25 Picks from Barca, a little bit clustered but in no particular order, since they all are equally interesting:

  1. Windows 10 Client Goodness with Joe Belfiore
  2. Microsoft Azure and Its Competitors: The Big Picture
  3. A Game of Clouds: Black Belt Security for the Microsoft Cloud
  4. Managing Cybersecurity Threat: Protect, Detect, and Respond
  5. Why a Hacker Can Own Your Web Servers in a Day
  6. Hacker’s Perspective on Your Windows Infrastructure: Mandatory Check List
  7. CSI: Windows – Techniques for Finding the Cause of Unexpected System Takeovers
  8. The Dark Web Rises: A Journey through the Looking Glass
  9. The Ultimate Hardening Guide: What to Do to Make Hackers Pick Someone Else
  10. Advanced Windows Defense
  11. Sneak Peek into the Next Release of Windows Server Hyper-V
  12. Power BI for Office 365 Overview
  13. Microsoft Analytics Platform System Overview
  14. Microsoft Analytics Platform System Deep Dive
  15. Introduction to NoSQL on Azure
  16. Introducing Data Factory: Orchestration on Big Data
  17. Jumpstarting Big Data Projects: Stories from the Field
  18. Windows Phone and Windows 8.1 App Model
  19. Deep Dive into New App Capabilities in Office 365
  20. Three-Way Data Binding in Universal Apps: View, View Model, and Cloud
  21. Kinect in Your Apps – Build to Amaze!
  22. The New Cocktail: 1 Tablet + 1 PC + 1 Phone + 1 Kinect + 1 Wall, Served Up on a Cloud
  23. Windows for Internet of Things Devices
  24. From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum
  25. CAREER DEVELOPMENT: Next Roles, Next Skills, and Staying Relevant in an Evolving IT World

Now you may feel that I over-emphasised the security theme a little, but surely you must be intrigued by the impressive and inspiring session list above. That’s why you probably can’t wait to explore the entire range of almost 400 Barcelona TechEd 2014 sessions yourself via the conference homepage. Have fun!

To read the original post and add comments, please visit the SogetiLabs blog: MICROSOFT TECHED 2014: MY TOP 25 PICKS FROM BARCELONA

Related Posts:

  1. Calm Technology to Revive Blackberry in 2014
  2. Five common App pitfalls
  3. Considering Windows 8.1 and Mobile devices for the Enterprise with the new Intel Architecture
  4. Windows Store Apps live in the Sandbox

 

AUTHOR: Jaap Bloem
Jaap Bloem has been in IT since the advent of the PC and is now a Research Director at VINT, the Sogeti trend lab, delivering Vision, Inspiration, Navigation and Trends.

Posted in: communication, Microsoft, Opinion, Technical Testing, Technology Outlook      
Comments: 0

 

For years we’ve built applications that assume the system is only used from a single location. As a result, most applications work with local time, with the local time set to the time zone the application lives in. So an application of one of our Dutch customers would run in UTC/GMT+1, whereas the reservation site of a Las Vegas hotel would run in Pacific Standard Time (UTC/GMT-8) or Pacific Daylight Time (UTC/GMT-7), depending on the season. You could think that there is no problem; after all, the systems work as they are supposed to. There are, however, at least two problems.

Applications are interconnected

Suppose the application of our Dutch customer had to interact with the reservation system of the Las Vegas hotel, for instance to get information about the latest time a reservation can be cancelled. The systems would need to agree which time to use, and make a conversion when necessary. That is possible but cumbersome, for instance because Daylight Saving Time starts and ends on different days.

Time zone is not the same on every machine

If we move an application to another machine, we have to be sure the time zone is the same on the new machine, otherwise the chance is pretty good the application runs into problems. Any operation comparing stored time data against local time would yield different results.

Cloud Platform Time

In Cloud platforms such as Microsoft Azure, all machines use the same time: UTC. And when using their PaaS instances, Microsoft recommends not changing that (see bit.ly/azuretimezone). The best solution is to use UTC anywhere date/time is stored, queried, or manipulated. Only format date/time as local time for input or output. UTC is the universal time zone: Cloud Standard Time (CST).
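
As a small illustration of that rule, here is a sketch in Python; the cancellation deadline and the two time zones simply reuse the hotel example above for demonstration:

# Store and compare date/times in UTC; localise only at the display edge.
# Standard library only; zoneinfo requires Python 3.9+.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Stored value: the latest cancellation time, always recorded in UTC.
cancel_deadline_utc = datetime(2014, 12, 24, 18, 0, tzinfo=timezone.utc)

# Comparisons happen in UTC, so they give the same answer on every machine.
now_utc = datetime.now(timezone.utc)
can_still_cancel = now_utc < cancel_deadline_utc
print(can_still_cancel)

# Formatting for output is the only place local time appears.
print(cancel_deadline_utc.astimezone(ZoneInfo("Europe/Amsterdam")))     # Dutch customer
print(cancel_deadline_utc.astimezone(ZoneInfo("America/Los_Angeles")))  # Las Vegas hotel

Because both systems store and compare in UTC, neither has to know when the other’s Daylight Saving Time starts or ends.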

To read the original post and add comments, please visit the SogetiLabs blog: CLOUD STANDARD TIME (CST)

Related Posts:

  1. The things Cloud does not solve
  2. Is Cloud a return to the Stone Age?
  3. Cloud usage flavors for Development and Test teams
  4. Cloud and Open source go hand in hand

 

AUTHOR: Michiel van Otegem
Michiel van Otegem is Principal Architect for the Microsoft Business Line at Sogeti Netherlands. In that role he advises clients on strategy and architecture for the Microsoft platform in conjunction with other technologies, specifically focusing on Cloud and Integration. Although focused on Microsoft, Michiel has broad knowledge of other technologies and integration between technologies.

Posted in: Business Intelligence, Cloud, communication, Digital, Microsoft      
Comments: 0

 

Having spent two days at AppsWorld at London’s ExCeL conference centre, it was great to see the level of advancement and constant change that is prevalent across various industries. Technology is playing an ever-increasing role in industry, and the panel discussions, which pulled together contributors from different areas, gave a great insight into how similar the challenges are of keeping up with the effect consumerisation is having on IT programmes.

The conference focussed on open content areas, such as:
- Developer World
- Droid World
- Cloud World
- Connected Car
- Gaming World
- Enterprise World

In addition, there was a series of premium tracks that covered a wide array of topics:
- Mobile Payments & Retail
- Mobile Strategy & Marketing
- TV & Multi-Screen Apps
- API Strategies
- Wearable App Tech
- HTML5 Workshops

The use of APIs

As I mentioned, the panel discussions were very beneficial. On Day One, listening to the discussion around “Exploring the business value of APIs – Opening data as a channel for Innovation” was very thought-provoking. With speakers from councils, retailers and product companies, there was a very balanced feel to proceedings. The main takeaway from the session centred on the consistency and availability of message transport. Oliver Ogg (Product Owner of APIs for M&S) focussed on how the company is not only providing digital solutions for its customers, but also solutions for its staff to use in-store, ensuring there is a ‘single source of proof’ for customer enquiries. The digital, omni-channel experience, though focussed on the consumer, needs to consider how staff interact with consumers. How the in-store message is translated to the consumer on their mobile device, or at home on their desktop, is key to converting enquiries into product sales.

API conversations, specifically around publicly available APIs, were present across multiple tracks over the two days. Companies are making APIs available in the public domain to encourage innovation in the market. Providing the tools (or guidelines) for developers to be creative in designing new or better ways of completing transactions is actively being encouraged. This was epitomised for me during the open track session presented by Mark Dearnley, HMRC’s Chief Digital and Information Officer. During his session, Mark provided an outline of the Government Digital Strategy and explained how, over the course of 2015, HMRC’s APIs will be made publicly available for developers to ‘make things easy’. HMRC has no desire to control the market, preferring to adopt a natural selection process.

What this means is that, if we take the example of Self-Assessment (SA), over the course of 2015 there may be a number of privately developed apps across mobile platforms that attempt to make SA submission more efficient. As consumerisation runs its course, natural selection will ensue as app store ratings take effect: only the most user-friendly and easy-to-use apps will survive, reducing the need to control the marketplace. And as the APIs are in the public domain, HMRC can still control the integration.

CRM Strategies, Push Notifications and App Usage

The session hosted by Patrick Mareuil, Chief Innovation Officer of Accengage, provided a very good overview of the brand loyalty of consumers with respect to app usage. Some highlights of the statistics shared make for interesting reading:
- 20% of users access mobile apps only once
- 40% of users access the app between 1 and 3 times
- 40% access apps 11 times or more

These statistics are interesting: taking them at face value, we can see a lot of missed potential in terms of consumer engagement.

From a testing perspective, the overview of the way push notifications are used outlined a number of use-case scenarios where companies such as Sogeti can assist with productionising apps ready for general use.
Concentrating on the right message, at the right time, in the right place, on the right channel is an important part of maximising conversion. From a testing perspective, being able to replicate these scenarios will provide customers with the right data to complement their digital marketing strategy. As with all approaches, there is a fine line between optimising the interaction with customers and overkill. Too much interaction and prompting can have a negative effect on a consumer’s willingness to buy.

In addition, the way in which this marketing activity interacts with other applications on the mobile device should be considered. If a user is playing level 105 of Candy Crush, and at the key moment of completing the level a push notification interrupts their enjoyment, this again could cause negative feedback. Balancing the need to interact and promote offers against not interfering with the consumer’s day-to-day use of their smartphone will need to be covered by the test scenarios that constitute the scope of a release. Throw into the equation the different approaches to notifications across device platforms, and the scope of testing increases exponentially if we are to ensure a consistent message across platforms and keep the user experience uniform.

Proximity Beacons

A number of the tracks either showcased or made reference to the use of beacon technology as a means to delivering up to date messages and special offers to customers based on their location within a store or theme park.

The use of the technology does, in my opinion, counteract the intrusive nature of this kind of advertising, as the consumer will be captive and in the right mindset to take on board the advertising messages. Some of the challenges outlined during the various talks centred upon the proximity limitations of the technology. In an expansive space, such as a theme park, there are unlikely to be beacons that interfere with each other’s transmission; but how do companies ensure that in relatively small retail stores the use of beacons is appropriate and displays the right message at the right time?

It was this example that highlighted to me one of the key challenges of testing beacons: how do you replicate them on a large scale? If you set up a test lab with a selection of beacons, do you lose the proximity characteristics of the live store environment? Is it sufficient to test using a small selection of beacons to conduct interruption-testing scenarios?

This is a very real issue that companies need to consider when introducing new technologies to their digital marketing strategies.

Wearable Technology

References to the Internet of Things and wearables brought some interesting viewpoints, but for me the best, and perhaps unsurprising, summary of this area came from the session on “Privacy and the adoption of wearable technology”. In this session, the key message was that most, if not all, policies around data security and protection apply to all devices. Securing the transport of messages from back-end system through to ‘thing’ must follow the same policy and legislation.

For me, the same can be said of the development of the ‘thing’ and also the testing of such devices. Validating the message transport and identifying weaknesses and vulnerabilities remains the challenge. Validating the display and user experience will require testing; developing omni-channel automation frameworks that maximise coverage whilst controlling the amount of maintenance will appeal to companies as the industry matures. This is certainly a key area of development that I am overseeing in the Sogeti Studio. In the coming months, we at Sogeti hope to be able to demonstrate these service innovations, to provide customers with an alternative approach to the current mode of operation.

The rise of Crowdsourcing – Testing device compatibility

Device fragmentation, specifically within the Android landscape, raises a number of challenges with regard to the age-old question of “How much testing is enough?”
Speaking to a number of companies at the conference, including app development agencies, the challenges were very similar: how do you make sure that the apps released are compatible with the devices out there? Some of the answers involved using emulators to provide breadth of coverage, complementing this with a top-10 physical device list to provide depth of coverage. Others mentioned a reliance on crowdsourcing; booking slots to open up the scope of testing on real devices seems to be becoming a popular supplementary approach to release testing.

When we add operating system platforms and screen resolutions into the mix, there needs to be a more robust way of achieving the right level of quality. Tool vendors need to look at ways of replicating user interactions in a standard manner, to provide options in the marketplace.

All in all, the conference was very thought-provoking, and has certainly provided a number of takeaways regarding how we at Sogeti can answer some of these challenges: by extending the current offerings within the Sogeti Studio, developing models that improve testing coverage on devices by complementing emulation with physical device testing, and creating omni-channel automation frameworks that promote efficiency in the test cycles.

AUTHOR: Daryl Searle
Daryl is a Delivery Director at Sogeti UK, primarily working with financial services clients. He is both a mobile and an agile subject matter expert.

Posted in: API, Crowdsourcing, Developers, Innovation, Wearable technology      
Comments: 0

 

Though many organizations consider the cloud to be a perimeter-less environment, strong security postures must be in place at certain entry points so that threats are disrupted and eradicated at an early stage. Cloud security is vital.

Does the cloud have edges? We refer to the cloud as a perimeter-less environment, with workloads moving dynamically through various physical networks and regions. The cloud is interlinked in such a manner that there is no clearly defined edge to it. So what does it really mean to create cloud security at the edge?

To answer this question, let’s use an analogy. When the world was considered a flat landmass, humans thought it had a physical edge. Once we understood that Earth is a globe, the concept of that physical edge was no longer valid. However, from a logical point of view, the world’s landmass is divided into continents, countries, cities, neighborhoods, apartments, houses, etc. People can move around freely among these various locations. However, each area has its own rules of entry to ensure people traversing these locations are checked for positive intent and don’t have a negative impact at the location. Thus, a strong level of investigation at the port of entry becomes critical.

Similarly, the cloud environment is created by a number of networks coming together. The entry points to each of these enterprise networks become critical from a security point of view. Yes, this was always the case; however, the traditional approach to edge security doesn’t work in the cloud environment simply because the cloud requires much more flexibility in terms of allowing workloads to move around. Standard hierarchy-blocking of IP addresses or restricted entry only creates more false positives. In this new context, network security solutions that can carry out deeper inspections are vital to differentiate between a legal workload and a malicious one.

Tightening Cloud Security

A cloud security solution working at the edge needs the following capabilities:

  • Visibility: It is important to give the security administrator detailed visibility into the kind of ingress and egress traffic that traverses the network, the specific URL categories visited and their IP reputations.
  • Control: Converting the visibility into relevant action is critical, so the enterprise should be able to define granular Web application policies. Also, blocking interactions with malicious URLs and allowing only the required business access to applications is important. This reduces the circumference for attacks against human vulnerabilities.
  • Protection: Large-bandwidth, deep packet inspection capabilities are required to efficiently handle traffic through the entry points. In a cloud environment, the intelligence behind identifying exploits is paramount. This is done to avoid false positives, thus providing the flexibility the cloud should provide and, more essentially, to gain intelligence to stop mutated and zero-day exploits since the network is directly exposed to the global threat landscape.
  • Multiple Traffic Type Inspection: The solution should be able to carry out the same level of inspection on encrypted traffic and traffic using varied protocols.

To protect your enterprise within the cloud, you need to create a strong security posture at the point of entry. This ensures that you disrupt threats at an early stage of their life cycle and that your enterprise’s cloud security strategy secures it from the edge.

The original post by IBM can be found here: Cloud Security: Protecting Your Enterprise at the Edge

AUTHOR: IBMSecurityIntelligence
Analysis and Insight for Information Security Professionals

Posted in: Business Intelligence, Cloud, Developers, Opinion, Security      
Comments: 0

 

Technology practitioners spend entire careers simplifying, be it removing complexity to make code more “elegant” or rationalising bloated application portfolios accumulated over time. The driving factors behind this push are self-explanatory: ease of maintenance, scalability, flexibility and cost. Is there ever a time in technology circles, however, when more is actually better?

Consumer mobile applications create great customer experiences by limiting purpose and features. In fact, when feature creep begins to make apps overly complex they are usually split apart, as in the case of Facebook and Facebook Messenger or Foursquare and Swarm. Creating multiple apps would seem counterintuitive to the enterprise IT department (“we should only have one ERP system, therefore we should only have one app”), yet how are the Facebooks of the world able to pull this off in a cost-effective manner?

The answer lies in digital architectures that are designed to be platform- and device-agnostic. These service layers are built with performance and scalability in mind, and can facilitate interactions with anything from a native mobile interface to a web browser. Elegance is critical at this layer of the architecture, so that a service catalog (API) that is easy to understand and use can be provided to the teams building the great experiences. The service catalog manages the complexities of interacting with core operational systems, and the apps themselves remain none the wiser.
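
As a rough sketch of that idea (in Python, with invented system and method names purely for illustration), the service layer acts as a facade: each lightweight app calls one simple, device-agnostic operation, and the orchestration across core systems stays hidden:

import json

class CoreERP:                        # stand-in for a core operational system
    def order_status(self, order_id):
        return {"order_id": order_id, "status": "shipped"}

class CoreCRM:                        # a second core system with its own quirks
    def customer_name(self, order_id):
        return "Acme Ltd"

class OrderService:
    # The service-catalog entry: one focused call, complexity hidden inside.
    def __init__(self):
        self._erp, self._crm = CoreERP(), CoreCRM()

    def get_order_summary(self, order_id):
        # Any client - native app, browser, kiosk - gets the same plain JSON
        # and never sees the orchestration across the core systems.
        return json.dumps({
            "customer": self._crm.customer_name(order_id),
            **self._erp.order_status(order_id),
        })

print(OrderService().get_order_summary(42))

Because the facade owns the integration, the HR manager’s app and the sales executive’s app can each stay small and purpose-built while sharing the same service layer.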

The consumerisation of IT has placed a premium on user experience, elevating it to new levels of focus and investment in the enterprise. As frightening as it may be for IT departments to see a sprawling landscape of apps spreading across their organisations like wildfire, more can actually be better with enterprise mobile apps. Does it really make sense for a human resource manager and a sales executive to have the same day-to-day mobile experience? If a digital enterprise focuses on architecture and governance it’s possible to stay true to the principles of technology elegance AND enable great mobile experiences for business users with very different needs. That’s the very definition of the power of digital delivered.

To read the original post and add comments, please visit the SogetiLabs blog: TOO MUCH IS NEVER ENOUGH

Related Posts:

  1. Objects Of Desire
  2. Don’t managers deserve beautiful apps?
  3. Does your cool app display your poor operational process?
  4. Digital marketeers : don’t underestimate mobile app projects

 

AUTHOR: Joo Serk Lee
Joo Serk Lee is a Vice President at Sogeti USA who has spent the past two years architecting major programs at Aspen Marketing Services and at Abbott Laboratories. He is an Enterprise Architect by trade and has spent much of his 15-year career working in and crafting transformation programs featuring complex technologies, including Microsoft/Java stacks, mobility and large CRM and ERP solutions.

Posted in: Developers, Human Resources, mobile applications, mobile testing, Mobility, Open Innovation, Opinion, Technical Testing, Technology Outlook      
Comments: 0