Lately, I’ve been asked a lot of questions by managers and teams alike about the organisational need to measure the performance of Scrum teams. Of course, most people refer to the Development team within the Scrum team… but that is where Scrum creates confusion.

The Product Owner and the Scrum Master, together with the Development team, form the Scrum team. The idea is to include all the roles in one neat package called Scrum. The Development team, however, is often simply called the ‘Team,’ because that is how the Product Owner and the Scrum Master address the people who create the product.

Many of us realise that we don’t measure the time spent on individual tasks within the Development team. We work with fancy units such as feature points, complexity points or story points (when using User Stories). The unit, however, does not matter as long as you realise it is not time spent. It is a relative value, obtained by comparing the effort and complexity of a chunk of work with that of the rest of the work to be done.

What many do not realise is that these points are NOT productivity metrics. In fact, they are there only to aid the Product Owner and the Development team in their work. Reporting to management the number of points burned down in a Sprint only serves to ensure transparency (with management), which is characteristic of Scrum.

The expectation that these points are metrics for productivity, and that we can use them to push the team to a higher performance level, is like measuring a company’s performance by the number of people working for it. Why would we measure the performance of a Development team by velocity increase, function point analysis or the amount of coffee they drink, when (in reality) a company’s performance (usually) gets measured on the basis of Return on Capital Employed, profit, turnover, etc.?

The main goal of a Scrum team is to increase the Return on Investment (ROI); the people who make this possible are the Scrum team’s developers. Pushing developers to the edge, by demanding a higher volume of output, leads to the following undesirable outcomes:

  1. The team gets demotivated and/or overworked, and productivity decreases.
  2. The number of errors the team makes increases; more bugs in the system result in more problems in operation.
  3. The team cuts corners to get more work done under pressure and, consequently, technical debt increases, which in turn bogs the team down over time while they refactor the code bit by bit.
  4. It creates a rift between the Product Owner and the Development team. The Scrum team, as a whole, works to create value and increase ROI, but the Product Owner is responsible for the ROI; the Development team is responsible for the work done.
  5. This anti-Agile behaviour reduces the effectiveness of the Scrum framework, which works best when the Development team is empowered. In fact, it is self-governance that achieves a sustainable pace.

The job of a Scrum Master is to help the team become better by facilitating their work and removing anything that keeps the team from going at the pace they want to. I personally believe that people in the Scrum team and the Development team WANT to work at a cutting-edge pace while creating the best possible product. Interfering with this will sabotage their dynamics and will achieve the opposite of what you force them to do.

Focus on measuring ROI of the product and you will be happier.

To read the original post and add comments, please visit the SogetiLabs blog: Measuring performance in Scrum teams is an act of sabotage!

Related Posts:

  1. Scrum world domination
  2. Say NO
  3. 5 ways to get your Scrumteam gimped
  4. Will project managers survive the agile trend?

Julya van Berkel AUTHOR: Julya van Berkel
Julya van Berkel is an Agile adept and coach and has been championing Agile and Scrum since 2007. In her role as Agile Coach, Julya has helped numerous clients and colleagues get better at Agile, and as a trainer she has set up and taught hundreds of Agile and Scrum courses. For Sogeti in the Netherlands she helps set the direction in Agile and is involved in many groups, both within Sogeti and in the wider Agile community.

Posted in: Behaviour Driven Development, Developers, Opinion, Scrum, Software Development, test framework, Testing and innovation      


The software created at the beginning of the computing era wasn’t really structured or built under an architecture. There were no guiding principles and everything was very tightly integrated. Data, logic, interaction and view all ran on one machine. It is, typically, the classic Spaghetti recipe that we see a lot in the 1st platform of Mainframe and Terminal.

Spaghetti architecture pic 1

From the Spaghetti ‘un-architecture,’ as we would call it nowadays, we moved to the layered n-tier architecture. This matched the 2nd platform of Client and Server very well and was called Lasagna, with reference and respect to the first paradigm. It offered a nice separation: persistent storage held centrally on the server, with business logic on top, interacting with the view and manipulation on the client. It is a clear separation of functionality and focus, but still often too tightly integrated for real cross-channel remixing.

Spaghetti architecture pic 2

Now we enter(ed) a new paradigm of interconnectivity, often referred to by Cognizant as SMAC (Social, Mobile, Analytics, Cloud). Things (as in the Internet of Things) made it SMACT, according to our own Sogeti VINT Labs, and Gartner calls it: “Nexus of Forces.” This is a 3rd platform (as IDC calls it) of interconnectivity where we are not just meshing up some data to a view. We are mixing and matching any data and all kinds of logic from any source that could be used and remixed towards any touch point like mobile, web and desktop; and even further to other physical displays like car, TV and billboards. But what is the architectural principle that we should start with to know if anything can connect to anything?

Spaghetti architecture pic 3

About a year or two ago, I stumbled upon Node.js, which struck me in two different ways. For one, it was amazing to see how far JavaScript had stretched to become an object-oriented language for both client and server. But more relevant for this post was the name of this server-side technology: Node. A Node that was, and is, focused on one essential functionality. Limited, but very valuable.

Being small and limited by focus and functionality, it is somewhat like how the App is, in relation to the more classical and typically bigger Applications. The Node is the tiny source or small interconnector in bigger and more complex networks. Those networks are built like a graph with Nodes and Links. Such networks can even have hierarchy by including other networks, as if those sub-networks were nodes themselves.
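Such a nested graph of nodes and links might be sketched like this (a minimal illustration; all class and node names are made up, not from any real framework):

```python
# Minimal sketch of a network of nodes and links, where a whole
# sub-network can be treated as a single node in a larger graph.
class Node:
    def __init__(self, name):
        self.name = name

class Network(Node):              # a network can itself act as a node
    def __init__(self, name):
        super().__init__(name)
        self.nodes = []
        self.links = []           # pairs of node names

    def add(self, node):
        self.nodes.append(node)
        return node

    def link(self, a, b):
        self.links.append((a.name, b.name))

inner = Network("payments")       # a sub-network of two nodes
card = inner.add(Node("card-gateway"))
ledger = inner.add(Node("ledger"))
inner.link(card, ledger)

outer = Network("platform")
shop = outer.add(Node("web-shop"))
outer.add(inner)                  # nested: the sub-network used as a node
outer.link(shop, inner)
```

Because `Network` subclasses `Node`, the hierarchy comes for free: the outer graph neither knows nor cares that `payments` hides a graph of its own.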

Spaghetti architecture pic 4

Networks like these are not tightly integrated and are not stacked in layers. Networks like these are loosely coupled through clearly described interfaces, called Application Programming Interfaces (APIs), which are remixed into solutions by third parties. These include all kinds of solutions that the creator of the node didn’t even have to think about … APIs to build anything. The API is the new App, so to say… quite literally, in this case.
The 3rd platform is this mesh; it is this nested and interconnected graph with sub-graphs where everything is coupled, but only loosely and based on clear interfaces. Now think of the links as spaghetti and of the nodes as meat. Are we back where we started? Or, are APIs enough to keep a clear overview and nice separation that we could even start to call architecture?

Spaghetti architecture pic 5

The key is to keep track of versions and dependencies, something we tend to forget in our current landscape as well. Making meshes of nodes requires a good insight into the dependencies and usage of services. This is needed far beyond your own organisation’s borders. Insight is necessary in all services across organisations.
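A toy illustration of the kind of dependency insight meant here, with entirely hypothetical service names and versions:

```python
# Illustrative dependency registry: which services consume which
# API, and at which version. Real landscapes would pull this from
# an API gateway or service catalogue rather than a dict.
deps = {
    "mobile-app":   {"orders-api": "2.1", "auth-api": "1.0"},
    "partner-site": {"orders-api": "1.4"},
    "dashboard":    {"auth-api": "1.0"},
}

def consumers_of(api):
    """Every service that depends on the given API."""
    return sorted(s for s, d in deps.items() if api in d)

def versions_in_use(api):
    """Distinct versions of an API still live across the mesh."""
    return sorted({d[api] for d in deps.values() if api in d})
```

Before retiring `orders-api` 1.4, for example, `consumers_of("orders-api")` tells you exactly whom to warn, including consumers outside your own organisation's borders.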

Spaghetti architecture pic 6

True transparency and solid governance, in an open, sharable and searchable format, are mandatory for this to qualify as a maintainable, stable mesh architecture. This can only succeed with an open standard; and that will take time to evolve and become adopted widely enough to form the foundation of the 3rd platform design guide and prevent the ‘un-architecture’ of the spaghetti of the 1st wave.

To read the original post and add comments, please visit the SogetiLabs blog: Revenge of the Spaghetti Monster

Related Posts:

  1. Omnichannel Services Should Deliver Functionality, Not Only Data
  2. Open APIs will Drive Innovation in a Mobile World
  3. It’s the platform, stupid
  4. Get started on building Apps

Arnd Brugman AUTHOR: Arnd Brugman
As an innovator Arnd helps organisations with innovative project delivery - from the initial idea and inspiration through to the vision, strategy and roadmap, all the way through to assisting with proofs of concept and pilots. He has significant experience with innovation, product development and service delivery.

Posted in: API, architecture, communication, SMAC, Technology Outlook, Transformation      


Any application that is critical to running the mainstream operation of a business can be considered a mission-critical application. These applications should be sustainable, maintainable, flexible and adaptable to keep pace with today’s evolving nature of business. “Change is the essential process of all existence,” as aptly stated by Mr. Spock in Star Trek: The Original Series.

As architects or developers involved in these types of software projects, it’s important to realise that software design and development often ends up following one of these two extremes: over-engineering or under-engineering.

Over the years, I’ve designed, developed and reviewed many software projects, and I have always found areas or specific pieces where the software was either too complex, or where oversimplification, bad habits, business rush or developer turnover had led to a messy situation.

If you can’t explain your code to someone else in a minute or two, then you have made the code too complex.

So, how can we avoid such problems? Here are four principles that can help address these issues:


The Keep It Simple Stupid (KISS) principle states that most systems work best if they are kept simple.

Unnecessary complexity should be avoided.

You should try to solve the problem in the simplest way you can. Ask yourself or your team: Do we really need to build the USS Enterprise or can a helicopter do the job?


Don’t Repeat Yourself (DRY) is a software development principle, aimed at reducing repetition of information of all kinds.

Many of the projects I review contain repeated code throughout the program. When a bug fix, update or new feature is needed, developers are forced to search the entire code base to find every occurrence and make the required changes.

Therefore, software designed without the DRY principle in mind is harder to maintain and less flexible, which in turn increases time to market.
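A small illustration of the difference (hypothetical billing code, not from any real project):

```python
# Repetition: the same discount rule copy-pasted into two functions.
def invoice_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:            # discount rule, copy 1
        total *= 0.9
    return round(total, 2)

def quote_total(items):
    total = sum(price * qty for price, qty in items)
    if total > 100:            # discount rule, copy 2 -- a change here
        total *= 0.9           # must be mirrored in invoice_total
    return round(total, 2)

# DRY: one authoritative implementation, reused everywhere.
def order_total(items, discount_threshold=100, discount=0.9):
    total = sum(price * qty for price, qty in items)
    if total > discount_threshold:
        total *= discount
    return round(total, 2)
```

When the business later changes the threshold or the rate, `order_total` is edited once; the duplicated versions each need a hunt through the code base.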


You Aren’t Going To Need It (YAGNI) is a principle of extreme programming (XP), which states that a programmer should not add functionality until deemed necessary.

Write the minimum code necessary to solve the problem.

Start out by developing software that does just what is needed and nothing more; but make sure that the code is extensible, so that adding onto it does not become a problem.
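For instance (a contrived example, not from any real codebase):

```python
# YAGNI sketch: the task today is only "export records as CSV lines".

# Speculative version -- generality nobody has asked for yet; every
# unused option is extra code to test, document and maintain:
def export(records, fmt="csv", delimiter=",", encoding="utf-8",
           compress=False, upload_to=None):
    ...

# Minimal version -- does just what is needed, yet stays extensible
# because the formatting lives in one small, isolated function:
def to_csv(records):
    return "\n".join(",".join(str(v) for v in row) for row in records)
```

If TSV support or compression is ever genuinely needed, it can be added then, to code that is still small enough to change safely.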


SOLID is a mnemonic acronym for the following five basic principles of object-oriented programming and design: Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion.

Software designed and developed with these five principles applied together should be easier to maintain and sustain over time.

Depend upon Abstractions. Do not depend upon concretions.
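The last of these, dependency inversion, might look like this in a hypothetical reporting module (class names are illustrative):

```python
# Dependency inversion sketch: the report generator depends on an
# abstraction (Storage), not on any concrete database or file class.
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def save(self, name, data): ...

class InMemoryStorage(Storage):
    """One concrete implementation; a DbStorage or S3Storage
    could be swapped in without touching ReportGenerator."""
    def __init__(self):
        self.files = {}
    def save(self, name, data):
        self.files[name] = data

class ReportGenerator:
    def __init__(self, storage: Storage):   # injected abstraction
        self.storage = storage
    def publish(self, name, rows):
        self.storage.save(name, "\n".join(rows))

store = InMemoryStorage()
ReportGenerator(store).publish("q1", ["a,1", "b,2"])
```

Because the high-level policy (`ReportGenerator`) never names a concretion, it can be unit-tested with the in-memory fake and deployed against real storage unchanged.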

To wrap up, keep your code DRY, design and develop only what is needed (YAGNI), embrace a KISS state of mind, work with rock SOLID principles and your mission-critical business applications will Live Long and Prosper.

“Live Long and Prosper.” — Spock, Star Trek: The Original Series

To read the original post and add comments, please visit the SogetiLabs blog: Live Long and Prosper: Apply Four Simple Programming Principles

Related Posts:

  1. Top 10 post: “Code generation… Can it be simple and pragmatic?”
  2. Code generation… Can it be simple and pragmatic?
  3. The service is dead, long live the service!
  4. To Live Long and Prosper

Carlos Mendible AUTHOR: Carlos Mendible
Carlos Mendible has been a .Net Architect for Sogeti Spain since 2012. He is Head Solutions Architect at one of our major clients, A3 Media, and is also responsible for assisting with the sales process as Pre-Sales Engineer and for conducting workshops and training sessions.

Posted in: Application Lifecycle Management, architecture, Behaviour Driven Development, Developers, IT strategy, Opinion, Test Driven Development      


Sogeti is proud to offer web and mobile testing services through our UK-based lab, Sogeti Studio, to support your digital or omni-channel strategy.

As part of this we are committed to researching the current trends in desktop, tablet and mobile devices as well as the operating systems and browsers that run on them. This allows us to ensure we can thoroughly test across the most popular types in each category, whilst also offering the ability to test across more niche types to deliver the test coverage our customers need.

Each month we prepare a report on the current omni-channel market trends and suggestions for testing, covering:

- Desktop Browsers
- Desktop Screen Resolutions
- Desktop Operating Systems
- Mobile/Tablet Browsers
- Mobile/Tablet Operating Systems
- Mobile/Tablet Service Provider (Device Manufacturer)

Find the February 2015 report here:

Key figures from February include:

- Internet Explorer has seen its market share dip since January, both globally and in the UK.
- Windows operating systems continue to dominate the market, with Windows 8.1 gaining share from Windows 7.
- Clients are starting to mention Chrome more often as a requirement for mobile/tablet testing.
- The Android and iOS mobile operating systems have a combined market share of around 80%.
- Studio is monitoring Windows 10 and Project Spartan; look out for a specific blog about this soon.
- Apple remains the dominant device manufacturer, closely followed by Samsung, then Nokia.

For more information about Sogeti Studio, please visit:

Find the January 2015 report here:

AUTHOR: Sogeti Studio
Sogeti Studio is our London-based web and mobile testing lab.

Posted in: Apps, Digital strategy, Marketing, mobile applications, mobile testing, Mobility, Omnichannel, SMAC, Sogeti Studio, User Experience      


The decentralised model for the creation and control of crypto-currencies means that they are a disruptive influence on traditional currencies, as they are not easily subject to central control. Their alignment with the rapid growth of consumer use of electronic purchases and electronic wallets, as well as their basis in the core principles of the Internet, means that they are likely to have a significant impact on our existing currency models.

As we increasingly operate via electronic and mobile banking, money has moved from being ‘a thing’ we hold, to numbers we move around. In such an environment, it is inevitable that new forms of currency will emerge to challenge the historic norms. Crypto-currencies, such as Bitcoin, take this a step further by using a model which reflects the underlying structure of the Internet and World Wide Web, namely a distributed model where the software and operating model becomes self-sustaining. So in such an environment, we believe it is more a matter of ‘when’ rather than ‘whether’ such models will impact our existing currency models.

The move from paper money and metal coins to electronic wallets is underway.
As a personal experiment, I try when travelling not to carry money and see if I can do everything electronically. In the Nordics, it has been the case for many years that anything can be bought with a card, no matter what the value. My observation is that other ‘advanced’ countries, such as the UK, are behind the curve: in the UK, cash still plays a large part in small transactions.

The limitation of the cashless model, as I discovered on one recent trip, is that the digital exchange can occasionally be vulnerable to lack of Internet access. This need for access illustrates the limitation of centralised, electronic money models. In principle, it should be possible to overcome this through the use of cards that guarantee a minimum value. But there appears to be a trend that all transactions try to make a remote call irrespective of the value.

If we replace bank cards with electronic wallets on phones, then this need for access can, in principle, be overcome, as only a local connection is required. The wallet can then maintain a running balance against the value lodged the last time a connection was made to the account. If electronic wallets become the focus, then this creates the need for an encrypted, electronic currency managed on mobile phones, so that we have the same portability as cash. (1)
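The running-balance idea could be sketched roughly like this (a simplified model; the names and the reconciliation rule are assumptions for illustration, not any real wallet's design):

```python
# Sketch of an electronic wallet keeping a local running balance
# between synchronisations with the account.
class Wallet:
    def __init__(self):
        self.synced_balance = 0.0   # value lodged at last connection
        self.pending = []           # offline spends not yet reported

    def sync(self, account_balance):
        """Connected: pending spends are uploaded to the account,
        then the fresh account balance is lodged locally."""
        self.pending = []
        self.synced_balance = account_balance

    def balance(self):
        return self.synced_balance - sum(self.pending)

    def spend(self, amount):
        """Offline purchase; needs only a local connection
        to the merchant's terminal."""
        if amount > self.balance():
            raise ValueError("insufficient funds")
        self.pending.append(amount)

w = Wallet()
w.sync(50.0)     # connected once: 50 lodged
w.spend(20.0)    # later, offline purchase
# w.balance() is now 30.0
```

The guard in `spend` is what gives the card-like minimum-value guarantee: the wallet can never commit more offline than was lodged at the last connection.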

Thus, independent of the emergence of crypto-currencies, we can see the trend towards electronic or digital money and the creation of electronic wallets for holding that digital money.

What is money anyway?
If we recognise that money is already becoming electronic then we can focus on who controls the currency and defines its value for exchange of goods and services. At a regional or country level, currencies are issued and controlled by Governments and their central banks. The value of the currency is then determined by wider economic issues and parameters (such as interest rates) and the money supply. Money supply has moved from a model of ‘printing money’ to one of quantitative easing. Money supply, and its impact on the value of the currency, is again determined by electronic transfer of funds rather than a physical manifestation of the currency.

So currency in this circumstance is merely a representation of a value, the currency having no intrinsic value itself. In fact, the value is guaranteed by the central bank, which then holds reserves of precious metal, although the relationship between the currency and the reserve is no longer fixed.

We can compare this with the historic concept of coinage, when coins were minted from metals which had their own intrinsic value. This led to the historic picture we have of people ‘biting’ coins to test the metal type. So in the historic physical world, coinage was more than a currency; it was an asset in its own right. The value of the coinage was related both to the value of goods it could buy, as well as to the inherent value of the metal it was made of.

Are crypto-currencies old money or new money?
To create a Bitcoin, the electronic coin has to be mined by processing a hashing algorithm. The complexity and energy required for the calculation increase over time, with the result that serious compute power and energy are required to mine new coins. Discussing this with a number of Bitcoin miners, it would seem that even in its early years, the processing needed moved quickly from a PC to requiring the processing power and energy management of a data center rack. Faster chips and lower-energy boards now make this easier, but at the same time the processing algorithm is becoming more challenging. As such, current miners consider it a struggle to match the cost of the processing needed for mining with the value of the Bitcoin created.
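The mining process described here can be sketched as a toy proof-of-work search (a deliberate simplification: real Bitcoin mining uses double SHA-256 over block headers and a vastly higher difficulty target):

```python
# Toy proof-of-work in the spirit of Bitcoin mining: find a nonce so
# that the block's hash starts with a given number of zero hex digits.
import hashlib

def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block-1", difficulty=3)
# Raising `difficulty` by one multiplies the expected work by 16,
# which is why mining cost grows so steeply as the target tightens.
```

Anyone can verify the result with a single hash, while finding it takes thousands (or, at real difficulties, quintillions) of attempts; that asymmetry is the cost the author describes.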

So we see that the mining is itself a high cost activity and as such there is an inherent cost and potentially value in the item created. So, digital assets are being created as a result of Bitcoin mining. This view was confirmed recently in a tax judgment (12) where Bitcoins were seen as property for tax purposes and are subject to Capital Gains Tax in some regions.

The original design of the algorithm ensures that the scarcity of the Bitcoin commodity increases over time and that there is ultimately a limited supply, as with physical mining of precious metals. The difference is that the Bitcoin limit is well defined at 21M coins (11), which means that the total value of the currency is likely to be in the billions of USD. However, building on the Bitcoin model, many other Crypto-currencies are emerging (5), which could have the potential to create much larger pools of currency and challenge existing, centralised money-supply models.

Given this analogy we can then see that the Bitcoin is more like the old forms of coinage made from precious metals than current Government secured currencies. Similar to precious metals, however, there is little relationship between the functional value of a Bitcoin and its trading value.

The distributed operating model for the Bitcoin and crypto-currencies
Unlike other currencies, there is no central governance for the creation and management of coinage; rather it is determined by the algorithm and the maintenance of the single view of the transactions and available money supply via the blockchain in the Bitcoin case. The blockchain is community managed and determines, via an algorithm, the mechanisms for mining the coins, which in itself changes over time to set the requirements for the mining process.

This means that the supply of available coins (aka the money supply) cannot easily be manipulated by a single individual (or Government). This creates a decentralised, community based control mechanism, which operates at the boundary between the physical and virtual world. It does however rely on an element of self-regulation and monitoring and any digital system can be subject to fraud given sufficient concerted effort.
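The "single view of the transactions" rests on hash-linking between blocks, which can be sketched as follows (a bare-bones illustration; the field names are not Bitcoin's actual block format):

```python
# Sketch of the chain structure: each block carries the hash of its
# predecessor, so tampering with one block invalidates all that follow.
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64              # genesis predecessor
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append({"prev": prev, "data": tx, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    return all(b["hash"] == block_hash(b["prev"], b["data"])
               and (i == 0 or b["prev"] == chain[i - 1]["hash"])
               for i, b in enumerate(chain))

chain = build_chain(["alice->bob:5", "bob->carol:2"])
chain[0]["data"] = "alice->bob:500"         # tamper with history
# is_valid(chain) is now False
```

Because every participant can recompute `is_valid` independently, no single individual is trusted with the ledger; rewriting history would mean redoing the work for every subsequent block on most of the network at once.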

This decentralised, virtual model is reflective of, and enabled by, the inherently decentralised nature of the Internet. However, we can also see a parallel: the Internet started out as a purely community-based tool but, now that its importance has become critical, it is subject to the attention of governments and large corporations. So whilst Crypto-currencies have started as purely community-based services, as they become significant and mainstream they will come into the same focus from governments and large corporations, particularly financial institutions.

In fact, this is already happening: from day one there has been value ascribed to Crypto-currencies, and as such they draw immediate attention from tax authorities, who need to understand their relationship to each country’s trading and tax regime. (2) (3)

Crypto-currencies as an alternative neutral currency for globalisation of services
Even in countries in the developing world, where currency can be unstable, access to the Internet via mobile phones and other devices is still growing rapidly, creating the opportunity for electronic money transfer. We can see this in the growth of M-Pesa, a mobile-phone-based money transfer system operating in parts of Africa (4). In such circumstances, if someone is to be paid for work, or money needs to be exchanged between individuals, it may be best for the agreed value to be held in a neutral form. In recent years the US dollar has often been used as that form, but an independent, neutral Crypto-currency, such as Bitcoin, offers an alternative.

This global ‘neutral currency’ for use in developing nations is identified by some observers as a significant opportunity and use case for Crypto-currency. It will, however, rely on the ‘neutral currency’ having a stable value for people to want to use or accept it, so it will probably only evolve as the currency stabilises and matures.

Crypto-currencies as a focus for speculative investments
Precious metals, and commodities in general (such as oil), fluctuate in value subject to supply and demand, but also in response to investors who speculate on their growth or demise. This can create large swings in value: as a result of mismatches in supply and demand, as we have seen recently in oil pricing; as a result of attempts to capture the market (as was evidenced in silver some time ago); or simply as a result of volatile trading in an immature or uncertain market. History tells us that all these aspects will at some point apply to the Crypto-currency market.

We saw it with the Internet where the .com boom of 2000 created a huge range of speculative investments in new properties and assets, some likening it to the Wild West period in US history. This passed with a period of consolidation and we have now moved into a period of continuous rapid evolution as new ideas and models test the consolidated giants.

The evolution of Crypto-currencies appears to be at a similar stage now to the .com boom in 2000, with a rapid growth of options and new currencies appearing all the time, accompanied by the wild swings in value, a consequence of the factors outlined above, that one finds in an emerging commodity market (5). One feels, though, that the rate of evolution is potentially faster than that of the Internet post the .com boom. The Internet consolidation ran up to the mid-2000s, and we then entered the next phases of growth as new waves of ideas arrived, such as social media and now IoT.

Crypto-currencies as a vehicle for currency conversion and trading
If one can convert a Crypto-currency into a traditional currency and back again, then it creates the opportunity to use it as the intermediate, canonical form for currency trading. Its potential benefit is that, because it has no legacy in physical currency, transaction charges can be very low, creating the opportunity to disrupt the existing exchange markets by creating alternative pathways. Given the huge volumes of currency trading that happen electronically on a daily basis, any disruptive model that is successful, even in a small way, will create a significant volume of transactions.

This opportunity is illustrated by the rapid growth in trading platforms and mechanisms, such as the recently announced trading options from Tether (6), which allows cross border trading between currencies using Bitcoin as the intermediary.
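The arithmetic of such an intermediate, canonical form can be illustrated with made-up numbers (none of these rates or fees are real quotes):

```python
# Illustrative cross-border transfer using a crypto-currency as the
# canonical intermediate. Rates and fees below are invented for the
# example only.
gbp_per_btc = 300.0   # hypothetical GBP/BTC quote at time of trade
eur_per_btc = 380.0   # hypothetical EUR/BTC quote
fee = 0.001           # 0.1% per leg -- far below typical bank charges

def convert_via_btc(amount_gbp):
    """GBP -> BTC -> EUR, paying the exchange fee on each leg."""
    btc = amount_gbp / gbp_per_btc * (1 - fee)
    return btc * eur_per_btc * (1 - fee)
```

The point is structural rather than numerical: with the crypto-currency as the hub, an exchange needs one pair per fiat currency instead of one pair per fiat-to-fiat combination, and the per-leg fee can stay tiny.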

Crypto-currencies as an alternative for retail transactions
Given the rapid arrival of electronic wallets as a means of purchase, it would be quite feasible to see these currencies become a vehicle for defining the value held in the wallet. However, for it to be used in retail purchases on a daily basis on a large scale, the value ascribed to it would need to stabilise. The benefits of using such a currency over an electronic record of a traditional currency would also need to become clearer for adoption to grow. Nonetheless we are seeing many experiments by early adopters, for example the Circle ATM (7).

However, if one of the big retail financial services were to adopt a Crypto-currency within its portfolio it could easily trigger a rapid acceleration of adoption. We can see a potential analogy to the buying of small start-up concepts by the web giants which can trigger step change growth in new types of digital services. We are seeing the first forays into this area as established players such as PayPal start to accept Bitcoins (8).

So, as with any new digital service, adoption is likely to move via step changes, either up when critical mass is reached or down when a better alternative (in the consumer’s eyes) appears.

What about the challenges?
Whilst Crypto-currencies have many positive aspects, they also create some challenges in terms of their potential for fraud and money laundering. Transfers are hard to trace to individuals and do not respect borders, which makes them potential vehicles for money laundering. The collapse in 2014 of the Mt. Gox Bitcoin exchange, based in Tokyo, Japan, left many unanswered questions, including where large volumes of Bitcoins had gone, and also sparked fears of fraud (9).

Gambling is also a large potential driver for the use of such currencies, enabling people to get around local laws (10). There is also the simple challenge of the availability of suitable electronic wallets to keep the virtual currency safe, both from external access as well as loss of data. These challenges however are not unusual and will inevitably generate their own solutions from the growing industry of start-ups focused on the opportunity.

So what do we do about this new world of virtual currencies?
Crypto-currencies have many of the characteristics of a precious-metal-based coinage with a form of inherent value, rather than of a government-backed currency used for trading of value. The decentralised model for their creation and control means that they are a disruptive influence on traditional currencies, as they are not easily subject to central control. Their alignment with the rapid growth of consumer use of electronic purchases and electronic wallets, as well as their basis in the core principles of the Internet, means that they are likely to have a significant impact on our existing currency models.

The rapid evolution of capabilities and use cases for Crypto-currencies creates a number of opportunities and threats for existing financial institutions and Governments. Now is the time to consider potential scenarios in which these use cases can be embraced, because once change happens it could be extremely rapid.

1.  News – Ciphrex Raises $500k to Advance Multisig Wallet Offering

2.  News – BoE worries about stability of decentralised Bitcoin

3.  News – Treasury is looking for inputs on what to do with Bitcoins

4.  News – Integration between traditional and new mobile money services such as M Pesa continues

5.  Information – Cryptocurrency stats

6.  News – Finextra: Tether brings traditional currency exchange to the blockchain

7.  News – Circle provides a Bitcoin ATM

8.  News – PayPal to accept Bitcoin

9.  The Inside Story of Mt. Gox, Bitcoin’s $460 Million Disaster | WIRED

10. News – Gambling drives Bitcoin transactions



AUTHOR: Cliff Evans
Cliff Evans is VP and Global CTO for Digital Customer Experience and Mobile Solutions at Capgemini.

Posted in: Behaviour Driven Development, e-Commerce, Internet of Things, Virtualisation      


Traditional, non-renewable energy sources are being rapidly depleted. Moreover, these forms of energy pollute the environment, and are often hazardous to extract and difficult to transport and distribute.

On the other hand, projects and initiatives to discover, provide and use clean and renewable energy are usually quite expensive. To be successful, such projects need a lot of investment: in research and development, in infrastructure, storage and distribution facilities, and in the actual production behind these high-tech initiatives. Unfortunately, such large amounts of money can usually only be provided by governments or big investment funds; and with extensive funding and support comes high dependency. Therefore, only a few small, private and local initiatives get noticed, let alone have any chance to grow into something that can actually make a major impact.

This has been the situation until the last few years.

However, over the last couple of years, crowd funding has been growing in popularity and importance. This alternative source of investment offers a chance for small initiatives to grow and become successful. These small projects are no longer dependent on ‘creativity-strangling’ government funding or ‘quick-return-seeking’ investment funds. Instead, innovative ideas can be presented to huge groups of people. If the idea is sound, lots of people are often willing to participate by investing relatively small sums of money. An added positive effect is that this type of funding allows individuals to get involved with the initiatives and be part of the process. Furthermore, there are now crowd funding platforms specifically targeted at clean energy initiatives.

What do you think: can crowdfunding help innovate and bring clean, renewable energy initiatives to life?

Here are 5 great initiatives:

Duurzaam investeren: ‘Duurzaam investeren’ (‘sustainable investing’) is a platform, active in Belgium and the Netherlands, that serves as a meeting ground and virtual marketplace, bringing private investors together with the creators of renewable energy projects. The platform has helped find investors for many initiatives, mostly wind and solar energy projects.

Renewable Energy Crowdfunding Conference (November 5, 2015, London, UK): This conference hopes to bring together experts to discuss and promote crowdfunding as a way to fund renewable energy projects and initiatives. From the website: “The Renewable Energy Crowd funding Conference will connect the leaders in crowd funding to project developers, investors and banks with an interest in exploring and using crowd funding for their business.”

The ACE 1 Ultra-Clean Biomass Cookstove: The ACE 1 is a stove that burns almost any biomass (such as pellets and woodchips) and, with the aid of solar energy, does so with incredible efficiency. Compared to traditional cooking methods, the ACE 1 is faster, uses significantly less fuel and reduces CO2 emissions; it also provides a convenient source of electricity in remote areas, enough to charge a phone or light a small lamp. Part of the income is used to subsidise cookstoves for families in developing countries.

Ocean Energy Turbine: This is an initiative to develop an ‘ocean energy turbine’: something like a wind turbine, but operating in the water. If the project succeeds, the promise is huge; it could potentially deliver enough energy to replace non-renewable sources. From the website: “The Ocean Energy Turbine is the first commercially viable clean renewable energy source that has the potential to compete with and eventually replace fossil fuels and nuclear energy dependence.”


Windcentrale: Launched in 2010, this Dutch initiative has raised over 14 million euros. Private investors jointly raise the funds to acquire a wind turbine; each investor becomes a part-owner and, on average, receives about 500 kWh per year. Making this a community-owned initiative gives investors an even bigger involvement with the project.
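The arithmetic behind such community wind shares is simple: divide a turbine's annual production by the number of shares. A quick sketch in Python, with hypothetical figures chosen only for illustration (the real numbers vary per turbine):

```python
def kwh_per_share(annual_production_kwh, total_shares):
    """Split a turbine's yearly output evenly over its community owners."""
    return annual_production_kwh / total_shares

# Hypothetical example: a turbine producing 5,000,000 kWh per year,
# divided into 10,000 community shares.
print(kwh_per_share(5_000_000, 10_000))  # 500.0 kWh per share per year
```

With these assumed figures, one share corresponds to roughly the 500 kWh per year the article mentions.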

To conclude, a growing number of initiatives and platforms are centred on crowdfunding as a way to attract funding and investors for clean, renewable energy. This really appears to push innovation forward, and it helps individual investors get involved in these innovations.

To read the original post and add comments, please visit the SogetiLabs blog: Top 5 inspiring clean energy innovation initiatives sponsored by crowd funding

Related Posts:

  1. How to predict the unpredictable? Smart wind and solar power
  2. Internet of Things and the Energy Market #exsum13
  3. A little big data, a lot more efficient energy management
  4. Funding Internet of Things innovation out of Industrial Internet savings

Gerard Duijts AUTHOR: Gerard Duijts
Gerard Duijts has been working as a business consultant at Sogeti since 2007, consistently innovating and improving the way Sogeti helps clients. Gerard is the SharePoint lead for Sogeti NL and the driving force behind SSI (Sogeti Social Intranet), an internationally recognized business solution that makes it possible to offer clients a social intranet in days rather than months, incorporating many years of experience and best practices.

Posted in: Behaviour Driven Development, Collaboration, Crowdsourcing, Environmental impact, Green, Innovation      


I think we technology leaders can play a big role in saving our society and, eventually, the planet. To do that, we need to start in our own homes and neighbourhoods.

First, let’s scope this a bit. My mother is old, and with every passing year she needs more help with simple things: shopping, travel, basic technical repairs around the house, and so on. She lives in a building with about 500 households, of which she knows exactly three by name. We kids help her when we can, but we have our jobs and do not live around the corner.

Wouldn’t it be great if all 500 households had access to a ‘Help around’ mobile app like the one below?

Help around - Mobile App

This app would open up a huge network of potential helpers for my mother, literally within walking distance! It could also bring back a bit of the social cohesion we have lost in most cities. The app, currently named ‘the connected flat,’ is just an idea and still needs more thought. However, I see real potential value, and even a business case, in this concept: local governments may be willing to pay for it, because it will save them money in the end.
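As a thought experiment, the core of such an app is just matching a help request to neighbours who have the right skill and live close by. A minimal sketch in Python; all names, skills and the floor-distance rule are hypothetical, invented purely to illustrate the matching idea:

```python
from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    floor: int
    skills: set

def find_helpers(residents, skill, requester_floor, max_distance=3):
    # "Walking distance" inside one building: within a few floors.
    return [r.name for r in residents
            if skill in r.skills and abs(r.floor - requester_floor) <= max_distance]

# Hypothetical neighbours in the building.
neighbours = [Resident("Anna", 2, {"shopping"}),
              Resident("Bert", 9, {"shopping", "repairs"}),
              Resident("Carla", 4, {"repairs"})]
print(find_helpers(neighbours, "shopping", 3))  # ['Anna']
```

A real app would of course add notifications, scheduling and some trust mechanism, but the matching core stays this simple.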

Smart use of modern technology is the key to solving contemporary problems, whether in healthcare, welfare, transport, education, sustainability, safety or politics. And if technology is the key, we technology leaders are obviously the people to open the doors and lead the way. Working in technology has never been more relevant, and it will be even more relevant tomorrow.

It is a huge responsibility for technology leaders, yet somehow start-ups seem more focused on creative solutions for real problems than the big IT companies are. That doesn’t seem right to me. Let’s not leave this responsibility to the start-ups; let’s combine our strengths and start developing solutions. Let’s save our society and our planet.

To read the original post and add comments, please visit the SogetiLabs blog: Let’s ‘Help around’ and save our society with a cool app

Related Posts:

  1. Things are shaping our society – @AndreasKrisch #exsum13
  2. The Four Pillars of a Decentralized Society
  3. Save the Planet with Machine Learning
  4. The Great Sensor-Era: Brontobytes will change Society

Andre Helderman AUTHOR: Andre Helderman
André Helderman has studied both Business Information Technology and Organizational Sociology, which makes clear that he is interested in the impact of technology on human behaviour.

Posted in: Application Lifecycle Management, Innovation, mobile applications, Mobility, Open Innovation, Smart, Technology Outlook      


‘Ambient Intelligence’ was the theme and guiding principle of MSTechDays, which took place from February 10 to 12 in Paris. An alternative term I would use for it is ‘Digital Aura.’ Beyond the ‘Digital Double’ and virtual life on the net, we are now in an era where the connection between the physical and the digital is permanent. The smartphone is the indisputable evidence of this, and every potentially connectable object adds to the ‘Digital Aura’ around us. Thus, everyone becomes a transmitter and receiver of information. It is this dimension that fascinates me: the opportunities, and the threats, that any new technology carries if we do not control it in the right way.

SogetiLabs and VINT, our international network of experts, have published four reports on the IoT and the data it generates, identifying how all industries, services … are involved.

The last report, called “Smact and the City,” also addresses the impact on our lives of the transformations caused by ambient intelligence. Building on this report and on the latest ‘Digital Week in Bordeaux’ (#SDBX4), we conducted the Illinda project with four students of the Ionis Group and the city of Bordeaux. The project delves into how younger generations deal with these new possibilities in their everyday urban life. It is simply amazing and refreshing.

Now, coming back to the Tech Days: about six sessions were led by SogetiLabs Fellows and Members. I would like to highlight three of them:

  • A session by Yann Sese, head of our Data Intelligence Center of Excellence, focused on how to monetise data and information, using an original method for detecting ‘weak signals’ in an avalanche of information. This method, called ‘Naïve Data Discovery,’ allows us to capture the right information and extract the right value.
  • The second session, jointly conducted by Thomas Gennburg and Olivier Pierrat, showed how the Digital Workplace, extended and multi-channel, can enable a new way of working by putting in place the ‘Any Time, Any Where, Any Device’ (ATAWAD) concept. It highlighted the workplace-related challenges and opportunities for any organisation.
  • Last but not least, the session led by François Mérand on Cloud development presented it as an essential lever for shifting from application development to digital service offerings in the new global environment.

I would like to conclude with one question for all organisations in this new world, which is both complex and hyper-connected:

How can we put ‘Design to Disrupt’ in a permanent position to propose new business rules and services and (re)design new processes in order to get all the benefits of the available technologies?

You can find some answers to the above question in the first part of the study on D2D on the SogetiLabs Blog.

But please feel free to share your own points of view!

To read the original post and add comments, please visit the SogetiLabs blog: Ambient Intelligence or Digital Aura – The Augmented People

Related Posts:

  1. The Silent Intelligence
  2. AOL’s digital prophet David Shing and The Age of Context
  3. Leading Digital Equals Data-driven Design, Development, Delivery, and Delight
  4. Digital Transformation Transit Map

Jacques Mezhrahid AUTHOR: Jacques Mezhrahid
Jacques has been working in the software services industry for 25 years and has been involved in a variety of innovative projects across market sectors. In charge of the Digital Convergence Offering for Sogeti France since 2012, combining mobility, collaboration, architecture and Big Data, he and his team develop solutions, skills and expertise within the company.

Posted in: SogetiLabs, Technology Outlook, Virtualisation      


“My hypothesis is that we can solve [the software crisis in parallel computing], but only if we work from the algorithm down to the hardware — not the traditional hardware first mentality.” – Tim Mattson

I wanted to start with this eloquent quote from Tim Mattson because it highlights the most pressing need generated by Big Data and the IoT today: an efficient model for processing large volumes of data. Parallel computing technologies provide tangible answers to such problems, but they are still quite complex and difficult to use.

On June 25, 2014, Google announced Cloud Dataflow, a large-scale data processing Cloud solution. The technology is based on a highly efficient and popular model used internally at Google, which evolved from MapReduce and successor technologies such as Flume and MillWheel. You can use Dataflow to solve “embarrassingly parallel” data processing problems: the ones that can easily be decomposed into “bundles” and processed simultaneously.
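The decompose-into-bundles idea behind “embarrassingly parallel” processing can be illustrated in a few lines of plain Python. This is a generic sketch of the pattern, not Dataflow SDK code; the squaring transform and the bundle size are arbitrary choices for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def process_bundle(bundle):
    # Each bundle is fully independent of the others: here, square every record.
    return [x * x for x in bundle]

def parallel_process(records, bundle_size=4):
    # 1. Decompose the input into independent "bundles"...
    bundles = [records[i:i + bundle_size]
               for i in range(0, len(records), bundle_size)]
    # 2. ...process the bundles simultaneously (a real system would spread
    #    them over many machines; a local pool stands in for that here)...
    with ThreadPoolExecutor() as pool:
        partial = pool.map(process_bundle, bundles)
    # 3. ...and reassemble the partial results in order.
    return [x for part in partial for x in part]

print(parallel_process(list(range(10))))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because no bundle depends on any other, adding workers scales the throughput almost linearly, which is exactly the class of problem Dataflow targets.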

Fortunately, I was whitelisted for the Dataflow private alpha release and was therefore able to get my hands on this brand-new Google Cloud service. I will not keep you in suspense any longer: I have been really impressed by the solution, and I am pleased to share five reasons why every business concerned with large-scale data processing should try Google Cloud Dataflow at least once:

1 – Simple: Dataflow has a unified programming model, relying on three simple components:

Cloud Dataflow program

A – Pipeline: an independent entity that reads, transforms and produces data
B – Pipeline data: the input data that will be read by the pipeline
C – Pipeline transformations: the operations to perform on any given data

A powerful pipeline processing 100 million records can be built in as few as six lines of code.
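To make the three components concrete, here is a toy model in plain Python. This is not the actual Dataflow SDK (which is Java-based); it only illustrates the pipeline / data / transformation split and the chained `apply` style:

```python
class Pipeline:
    """Toy pipeline: holds input data (B), collects transformations (C),
    and runs them in order when asked (A)."""
    def __init__(self, data):
        self.data = list(data)             # B: the pipeline data to be read
        self.transforms = []

    def apply(self, transform):
        self.transforms.append(transform)  # C: a transformation to perform
        return self                        # return self to allow chaining

    def run(self):
        out = self.data
        for t in self.transforms:
            out = [t(x) for x in out]      # apply each transform element-wise
        return out

# A: the pipeline ties data and transformations together.
result = Pipeline([1, 2, 3]).apply(lambda x: x * 10).apply(str).run()
print(result)  # ['10', '20', '30']
```

The real SDK follows the same shape (create a pipeline, chain `apply` calls, then `run`), but executes the transforms in parallel on managed Cloud workers rather than in a local loop.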

2 – Efficient: I am used to performing computations on large data sets. One of my programs, which generated statistics over more than 56 million records, took 7 to 8 hours on a 12-core machine; with Dataflow, it took about 30 minutes. It is also important to note that Dataflow provides a real-time streaming feature, allowing developers to process data on the fly.

3 – Open Source: Dataflow consists of two major components: an SDK and a fully managed Cloud platform that handles resources and jobs for you. The Dataflow SDK is open source and available on GitHub, which means a community can quickly grow around it. It also means you can easily extend the solution to meet your own requirements.

4 – Integrated: Dataflow can natively read inputs from, and write outputs to, several locations: Cloud Storage, BigQuery and Pub/Sub. This integration can really speed up and ease adoption if you already rely on these technologies.

5 – Documented: The Dataflow documentation site is well stocked with theory and examples. If you are familiar with the pipeline concept, you should be able to run your own customised pipeline in less than an hour. Many valuable examples are also provided on the SDK’s GitHub page.

Dataflow is still in alpha, but it already promises to be a strong player in the world of Cloud parallel computing.

Stay tuned for more posts on this topic.

To read the original post and add comments, please visit the SogetiLabs blog: Google DataFlow: Get rid of your embarrassingly parallel problems

Related Posts:

  1. How do Facebook and Google handle privacy and security?
  2. Google+ is dead, long live Google+!
  3. Is Google God?
  4. EAZE introduces “Nod To Pay” service combining Go

Jean-Baptiste Clion AUTHOR: Jean-Baptiste Clion
Jean-Baptiste Clion has been the Google Practice Technical Lead for Sogeti Switzerland in Basel since 2013. In this position, he is in charge of all technical aspects of Google-related activities, a role that demands strong research and innovation skills in order to design and develop cloud solutions matching customers’ requirements.

Posted in: Big data, Cloud, integration tests, Internet of Things, Transformation      


Running a project with Agile or Scrum methodologies does not mean it is OK to skip functional requirements. Once, a company handed application mockups to its developers without providing any details about what the application was supposed to do; it was left to the developers to guess, from the mockups alone, what actions the application should perform. As expected, this led to many iterations and unnecessary exchanges between the business and the developers. Days of work were wasted as the changing functional requirements were communicated verbally. When the application was ready for testing, the requirements changed yet again, because there were former business users in the QA department.

Essentially, application requirements should consist of mockups (also known as wireframes), business rules, a functional description of what each button and link does, and a list of the fields with their types and lengths. Requirements should be completed and signed off by the business before development begins. It is best if requirements are completed for related screens together, so that the architect gets a broad view and can guide the developers accordingly.
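Such a requirements package could even be captured in a structured template. A hypothetical sketch in Python; the class and field names are my own invention, not any standard, and the login screen is just an example:

```python
from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    name: str
    type: str        # e.g. "string", "decimal"
    length: int      # maximum length

@dataclass
class ScreenRequirement:
    mockup: str                                    # link to the wireframe
    business_rules: list = field(default_factory=list)
    actions: dict = field(default_factory=dict)    # button/link -> behaviour
    fields: list = field(default_factory=list)     # FieldSpec entries
    signed_off_by: str = ""                        # business sign-off record

# Hypothetical example: requirements for a login screen.
login = ScreenRequirement(
    mockup="login_wireframe.png",
    business_rules=["Lock the account after 3 failed attempts"],
    actions={"Submit": "Validate credentials and open the dashboard"},
    fields=[FieldSpec("username", "string", 50)],
    signed_off_by="Jane Doe (business owner)",
)
print(login.fields[0].name)  # username
```

The point is not the code but the checklist it encodes: a screen is ready for development only when every part of the structure, including the sign-off, is filled in.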

I am not proposing Waterfall, where all the requirements for the entire application are gathered up front while the developers wait. Ideally, the business analysts would run their own iterative requirements cycle, one step ahead of the iterative development cycle. In my opinion, there should be one business analyst for every four or five developers; fewer than that creates a bottleneck.

The goal of good functional requirements, signed off by the business, is to reduce scope creep and changes, which in turn reduces costs. Developers are typically the most expensive resources in an IT department (after the managers), so it is wise to make the best use of their time and skills.

Would you be able to build a car from just a picture, with no specifications? I suppose not. So don’t attempt to build software that way either.

To read the full post and add comments, please visit the SogetiLabs blog: Skipping Functional Requirements in Projects is a Costly Mistake

Related Posts:

  1. Multi-disciplinary? Cross-functional!
  2. Everybody tests with Agile
  3. Why not document software development projects?
  4. Going further than monkeys: Let’s document requirements effectively

Greg Finzer AUTHOR: Greg Finzer
Greg Finzer is the Custom Application Development Community of Practice Lead for the Sogeti Columbus Region. His duties include identifying technology trends, facilitating access to training & certifications, developing architecture expertise, supporting sales & delivery, and increasing participation in the local developer community.

Posted in: Developers, Scrum, waterfall      