SOGETI UK BLOG

Building great apps and achieving high ratings is not easy. Developing apps requires new skills in different fields of expertise, as I covered in one of my earlier blog posts. But how can you improve apps that have already been pushed to the stores? There are many ways to find improvements for your apps, and analysing the feedback loop is a good starting point.

Collecting feedback is an important activity, as it will help you prioritise improvements in the roadmap and release planning. This feedback can also be used to support your business case for necessary changes and updates, which is crucial to obtain budget for new releases and keep innovating your app. Budgets are not unlimited, so the development team should figure out where to focus and how to report this to the respective business owners.

[Image: the app feedback loop]

It’s the product owner’s responsibility to eventually convert improvements into epics or stories and put those on the product backlog. Once prioritised, the product owner and scrum master can work out a clear set of requirements for the developers and add the stories to the sprint backlog. Getting the right information out of the feedback loop is a continuous activity in your app development process and crucial to maintaining focus. Below are a few pointers that should help you get the most out of the feedback loop. Feel free to share your thoughts with me on this topic; I would love to see your feedback.

Learn from other apps

There are plenty of existing apps out there, and in most cases you will find similar apps in the same category with very good ratings and a high adoption rate. You can use these as a benchmark for your own apps. Most apps use common design patterns for e.g. registration and integration with third-party apps and services. In addition, Google and Apple regularly promote apps with a successful implementation of the platform-specific design guidelines that app users expect.

When useful? Benchmark research doesn’t require much time and is a very useful source of inspiration for improvements.

Any cautions? Manage expectations correctly towards the business owner and your team. To reach 4- or 5-star ratings, app developers have typically been failing, learning and improving over a long period of time. Prepare yourself and the team to release and improve in iterations instead of aiming for a 4-star app right from the beginning.

Learn from feedback in the app store

The most accessible feedback is the user reviews in the app store, which are readable by everyone.

When useful? If you have a very active user population they will rate your app and sometimes write reviews.

Any cautions? By only looking at the ratings (whether low or high) you could miss essential details on why your app is good or bad. Collecting app store reviews might not give you the complete picture. Apple currently does not offer options in iTunes Connect to respond to feedback from the App Store (the Google Play Store does), so you will need to think of other ways of interacting (see below). For all published apps you can use the release notes section of the app store to announce new features and improvements based on user feedback.
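If you export reviews for analysis, even a tiny summary routine helps separate the written signal from the bare star counts. Below is a minimal, hypothetical sketch; the `Review` shape and the one-to-two-star follow-up rule are my own assumptions, not a store API.

```java
import java.util.List;

// Hypothetical helper for summarising exported store reviews; the
// Review record and the follow-up rule are illustrative assumptions.
public class ReviewSummary {
    public record Review(int stars, String text) {}

    // Average star rating across all reviews (0.0 for an empty list).
    public static double averageStars(List<Review> reviews) {
        return reviews.stream().mapToInt(Review::stars).average().orElse(0.0);
    }

    // One- and two-star reviews with written text deserve follow-up:
    // they usually explain *why* the app disappoints, which the bare
    // star count does not.
    public static List<Review> needsFollowUp(List<Review> reviews) {
        return reviews.stream()
                .filter(r -> r.stars() <= 2 && !r.text().isBlank())
                .toList();
    }
}
```

The point is not the code but the habit: the low-star reviews with text are the ones worth routing to the product owner.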

[Image: example of a bad review]

Implement in-app feedback options

Offer the user a feedback option inside your app. The simplest way is to trigger an email pre-filled with text (variables) from your app, such as high-level information about the user, the screen name and even technical information such as the app version and device used. Of course there are more advanced ways to embed feedback forms in your app, but triggering an email is easier to implement and most users have at least one email app configured on their device.
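As an illustration of the email approach, here is a small sketch of composing the pre-filled message as a mailto: URI; on Android you would hand the result to an Intent (ACTION_SENDTO). The address and field names are assumptions for illustration.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: compose the pre-filled feedback email as a
// mailto: URI. The address and field names below are assumptions.
public class FeedbackMail {

    public static String buildMailtoUri(String address, String screenName,
                                        String appVersion, String deviceModel) {
        String subject = "App feedback - " + screenName;
        // Contextual variables from the app go into the body so the
        // support team knows where and on what device the user was.
        String body = "Screen: " + screenName + "\n"
                    + "App version: " + appVersion + "\n"
                    + "Device: " + deviceModel + "\n\n"
                    + "Your feedback:\n";
        // URLEncoder uses form encoding ('+'); mailto prefers %20.
        return "mailto:" + address
                + "?subject=" + encode(subject)
                + "&body=" + encode(body);
    }

    private static String encode(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8).replace("+", "%20");
    }
}
```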

When useful? In-app feedback should be the most accessible feedback option for the app user. Submitting feedback should not require more than two or three taps. Because contextual information from the app can be attached to the feedback, more information can be obtained.

Any cautions? Before adding in-app feedback options, discuss with your design team how to add this option in a discreet way that prevents annoyances (e.g. if feedback is sent via an external email app, the user should be informed about leaving the app). For the simple email implementation you need your support admins to create a shared mailbox to receive the feedback. Make sure that your product support team moderates the incoming email. The product owner should have access to the shared mailbox to collect feedback and use it for reporting if necessary.

Learn from app store analytics data

Analytics data for your app is available through your app store account in the analytics/statistics section.

When useful? General app performance data and metrics can be obtained from the app stores, but they are mostly limited to high-level information such as app store views, app downloads, app sales/revenue and (high-level) crash reporting. If this is good enough for your reporting, you are good to go and can use this data.

Any cautions? Again, there is a big difference between Apple and Google here. If you need platform-specific insights for e.g. your Android apps, you can get tons of information from the Google Play Store. The same is more or less true for iOS apps in Apple iTunes Connect, with the big difference that Apple collects data based on opt-ins. This means that only some users, those with data sharing switched on, contribute to detailed app store analytics. When you need more insight into the performance of your apps (independent of the platform used), switching to a more detailed analytics tool could be necessary (see the next item).

[Image: the iOS data-sharing opt-in screen]

Learn from in-app metrics

In this case, metrics are collected in analytics tools such as Adobe Analytics, Google Analytics or other solutions. Most analytics tools require embedding an SDK component in your app, which has to be available cross-platform. With this you are able to receive out-of-the-box analytics data (like that available in the App Store), but on top of that you can manually add events and goals in your app that generate specific in-app metrics.

When useful? A big difference from the standard analytics data out of the App Store is that there are technically no opt-in limitations as described before. You have much more flexibility in getting cross-platform metrics as well as detailed and specific in-app metrics per platform type. Examples are screen hits, button presses and funnels for conversion reports. Collecting in-app metrics requires more effort but is great for getting more in-depth insight into how your app is performing.
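To make the funnel idea concrete, here is a minimal sketch of how step-by-step conversion rates fall out of in-app event counts. The step names and numbers are hypothetical; a real analytics tool would compute this for you from the events your app sends.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal funnel-report sketch over in-app event counts; step names
// and the insertion-ordered map are assumptions for illustration.
public class Funnel {

    // For each step, the fraction of users retained from the previous
    // step; the first step is the baseline (rate 1.0).
    public static Map<String, Double> conversionRates(LinkedHashMap<String, Integer> steps) {
        Map<String, Double> rates = new LinkedHashMap<>();
        Integer previous = null;
        for (Map.Entry<String, Integer> step : steps.entrySet()) {
            rates.put(step.getKey(),
                    previous == null ? 1.0 : step.getValue() / (double) previous);
            previous = step.getValue();
        }
        return rates;
    }
}
```

With, say, 1000 product views, 400 add-to-cart events and 100 checkouts, the report shows 40% and 25% step conversion: exactly the drop-off data a funnel chart visualises.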

Any cautions? Analytics tools can be cumbersome to set up. Next to defining a clear set of requirements, you need specific skills to set up the dashboards, custom goals and metrics. If integration with other data sources is required, you might need help extracting the right data for your reporting. Make sure that you plan a session with the business owners about the goals and metrics they would like to measure (ideally the business goals for your app have already been set and are measurable).

Data collection should be limited to anonymized data and only be used for optimizing and improving. Before implementing in-app analytics, make sure that you discuss with your legal department what data you can or cannot collect and how to inform the user about data collection in your app. Due to privacy rules, storing personally identifiable information in analytics clouds is in many cases not allowed. Special agreements can be made to switch off certain data collection options on the vendor side (such as measuring the source IP). Your test/audit team can validate this before pushing the app live.
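One common technique for keeping analytics data anonymized is to replace the user identifier with a salted hash before it leaves the device. A sketch follows; whether hashing alone satisfies your privacy rules is a question for your legal department, so treat this as an illustration, not a compliance recipe.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Sketch, not a compliance recipe: pseudonymize the user identifier
// with a salted SHA-256 hash before it is sent to an analytics cloud.
public class Anonymizer {

    public static String pseudonymize(String userId, String appSalt) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(
                    (appSalt + userId).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(hash); // stable, non-reversible id
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

The same user always maps to the same pseudonym (so funnels and retention still work), but the analytics vendor never sees the raw identifier.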

[Image: funnel conversion report]

Do beta testing

According to Google, 60% of the most popular apps in the Google Play Store run beta programs. The nice thing about the Google Play Store and Apple App Store is that both can support you in running a beta program for a limited group of testers that you select. Most of your users already have an app store account, which makes beta testing through the store very accessible (no additional registration work is needed).

When useful? Beta testing offers you a way to test prereleases of your app in a controlled fashion. Apps can be installed on lots of different devices with different form factors. Although there are ways to do automated UI testing in the cloud, beta testing allows you to get feedback from a relatively big group of real users on usability, performance or even crashes. This is valuable information, as users can share their feedback one-to-one with the developers. Involving users in beta testing allows you to get feedback first-hand and tackle the bigger errors before pushing your app to a bigger crowd. This could save you some nasty reviews in the app store.

Any cautions? Before running beta programs, make sure that you have your release management processes and tooling in place. Your team should be able to deliver automated builds of your app, signed with the right certificates and stored with a logical version number in the file name and app properties. This allows the testers to install the app, give feedback and report bugs linked to a specific version. A tool like HockeyApp lets you set up continuous deployment for your app, but requires beta testers of iOS apps to register their device UDIDs separately. Running a beta program through Apple’s TestFlight is more user-friendly, since the TestFlight app makes installing beta apps easier, with no need to keep track of UDIDs or provisioning profiles.
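As a trivial illustration of the "logical version number in the file name" point, a build step can derive the artifact name from the app name, semantic version, channel and build number. The naming scheme below is just an assumption; what matters is that testers and bug reports can always be tied to one exact build.

```java
// Tiny sketch: compose a build artifact name from app name, semantic
// version, release channel and build number. The scheme itself is an
// assumption, not a standard.
public class BuildNaming {

    public static String artifactName(String app, int major, int minor,
                                      int patch, int buildNumber, boolean beta) {
        String channel = beta ? "-beta" : "";
        return String.format("%s-%d.%d.%d%s-b%d.apk",
                app, major, minor, patch, channel, buildNumber);
    }
}
```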

If your app launch date is time-critical, you should consider starting with internal beta testing as soon as you have a minimum viable version of your app ready. Once your app is complete enough and useful for a bigger crowd, you can offer it through a public beta test. During the beta testing period you can monitor both the app and the backend by checking the (in-app) analytics metrics and the server-side performance logging.

Announce your public beta tests through different channels. The Google Play Store offers a promotion banner and button for users to opt in. Use channels like your website, social media, email or even print (QR codes) to promote the beta program to your target audience. Give your team enough time to collect feedback and prepare improvements across several releases of your app.

[Image: beta opt-in banner in the Google Play Store]

Do usability testing

If certain features are not yet released and are still in a design/concept phase, important feedback can be gathered from usability tests in a (closed) lab environment, or simply by interviewing users in public. Professional usability labs offer equipment such as test devices, eye-tracking cameras and video recording tools to support the usability test.

When useful? Organising a usability test makes sense when your app is getting a design overhaul or when you are introducing new functionality that requires multiple iterations of design work. Usability tests can be done with clickable prototypes or fully functional apps. If you don’t feel like burning all your money on development, there are lots of tools available to create clickable prototypes that can run in full-screen mode, so that the tester can (to some extent) experience the look and feel of the real app.

Any cautions? To get the most out of your usability tests, make sure that you reserve enough time for the preparations. Preparation time is necessary for defining and writing the test scripts, but also for setting up demo accounts and test devices to support your tests. Because a clickable prototype does not behave the same as a fully functional app, avoid giving the usability testers complicated tasks. When developing a clickable prototype, the focus should be on designing, testing and implementing the ideal flow, and less on testing features.

Depending on the type of app, make sure that you invite the right testers. You might need to define personas first, so that you invite testers with a certain profile who really represent your user base.

[Image: usability test lab]

AUTHOR: Thomas Wesseling
Thomas Wesseling is a Manager Consultant in the Sogeti Global Mobility Practice. He is the initiator of the Sogeti Global Mobility Practice community on Sogeti Teampark that has grown to 500+ members over the last four years.

Posted in: beta testing, Big data, Business Intelligence, Cloud, communication, Developers, Digital strategy, Human Interaction Testing, Innovation, mobile applications, Uncategorized, User Experience      

 


[Image: touch the customer's heart]

Interactive marketing teams or departments play an important role in the company’s digital strategy. Or at least they should.

Every QR code that is printed, website that is built, mobile app that is delivered and piece of content that is published will pass through this department before a customer sees it. Online media has become a vital asset for engaging today’s customers, but it is also costly to manage, especially when external parties like digital agencies and IT developers are involved in the creation process.

This is why interactive marketing departments are increasingly insourcing design teams, interactive media experts and IT developers for the construction of websites, mobile apps and other digital assets.

Digital marketeers who have worked with web development teams have learned about the different concepts around Content Management, Search Engine Optimization (SEO), Customer Relationship Management (CRM), Website Analytics and (Online) Focus Groups.

With other channels like mobile added to the mix, a different approach is needed. Creating mobile apps requires new skills in different fields of expertise, and this should not be underestimated.

Below are a few takeaways for people who work in interactive marketing teams and manage mobile app projects.

1. Interactive marketing departments that were formed during the internet revolution have people with a history of managing website projects who don’t necessarily have experience with mobile app projects. Teams can underestimate the size and impact of app projects, as they can become bigger and more complex to manage than the average website projects they managed in the past.

2. Look at the total (cross-channel) experience. Is the mobile app part of an existing experience (e.g. on a website)? Make sure that the app can be integrated with that experience to prevent confusion or frustration among existing online customers.

3. A clear scope should be defined by the product owner, or apps will turn into Swiss Army knives: program managers and product owners should keep to the rule "less is more" when defining scope for the design and development teams.

4. User eXperience (UX) design should be done by people with a good view of mobile UX.

5. User acceptance testing should be done by people with experience in mobile UX, but also by potential end users, before launching the app. If the budget allows, prototypes should be tested before jumping into any development work.

6. Apps are often implemented as mash-ups and therefore require integration with several backend systems/APIs. Creating these APIs relies heavily on IT development and solution architecture skills.

7. Leaving app development and design to a pure IT player without experienced design resources on board is risky. If the development team does not have those skills, a design team needs to feed them proper design assets. Managing requirements and delivering proper (UX) design assets are a must.

8. App development teams should reserve additional time for system and integration testing, especially when the app becomes more complicated (e.g. stores data offline and/or is part of a cross-channel experience) and is developed for several mobile platforms.

9. Not everyone is aware of how mobile app ecosystems work, how apps are reviewed and how in-app analytics can be collected. Following up on user reviews and improving apps based on facts gathered through mobile analytics is important.

10. When implementing in-app analytics, clear goals have to be defined by the team. Next to measuring technical defects (like crashes and other bugs), measuring conversions from the mobile app is important. But do respect the user’s privacy.

AUTHOR: Thomas Wesseling
Thomas Wesseling is a Manager Consultant in the Sogeti Global Mobility Practice. He is the initiator of the Sogeti Global Mobility Practice community on Sogeti Teampark that has grown to 500+ members over the last four years.

Posted in: acceptance tests, Apps, Developers, Digital, Digital strategy, Internet of Things, mobile applications, mobile testing, Mobility, Omnichannel, Software Development, SogetiLabs, User Experience      