Will “Big Data” Solve All Our Problems?

By Gence Emek

Big Data is a powerful tool with the potential to assist in making some of the world’s toughest decisions. The speed at which large samples of raw data can be collated and crunched continues to improve outcomes for large organisations worldwide, both government and private sector.

“Big Data” – the crunching, cross-referencing and aggregation of very large accumulated datasets – is a key tool for scientists, marketers and social researchers in getting the answers they need.

Retail, for example, is an industry in which Big Data is crucial in targeting the right market with the right products. Through checkout analysis, store-by-store trends and loyalty programs, retailers hold a large amount of information on shopping habits, which they use to make more informed decisions about stock and product positioning.

In the lead-up to the 2004 Atlantic hurricane season, Wal-Mart executives collated and mined sales data from previous natural disasters and, realising their top pre-storm sellers were beer and junk food such as Pop-Tarts, stocked their stores accordingly.

However, raw data has two fundamental limitations that prevent it from being converted into useful information without human input. First, data is useless unless it is interpreted in the context in which it was collected. Second, data cannot capture those elements of the human experience that we consider crucial to effective decision-making.

Computers lack perspective

Data is only as useful as those who interpret it. A jump in Pop-Tart sales, for example, is only informative when read in the context in which the increase occurred.

Data indicates that the cause of the US subprime mortgage crisis was the inability of heavily indebted households to meet mortgage repayments, leading to an increased number of foreclosures. Why were these households so heavily indebted, though? Who assessed these borrowers for credit, and why were lenders willing to extend loans to those who could not afford them?

To answer these questions, the context of the mid-2000s real estate boom and the surrounding investment climate must be examined. Statistical credit underwriting “score cards” were developed to rapidly assess credit applicants, and the big market players tweaked their underwriting criteria to approve more applicants than usual.
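A scorecard of the kind described above can be sketched as a simple additive model. The attributes, point weights and approval cutoff below are invented for illustration – real underwriting scorecards are proprietary and far more elaborate – but the sketch shows how “tweaking the criteria” (moving the cutoff) changes who gets approved:

```python
# Toy additive credit "score card": each attribute an applicant has earns
# a fixed number of points; approval means clearing a cutoff.
# All weights and attribute names here are hypothetical.

SCORECARD = {
    "stable_employment": 30,
    "low_debt_to_income": 40,
    "clean_payment_history": 50,
}

def score(applicant_attrs: set) -> int:
    """Sum the points for every scorecard attribute the applicant has."""
    return sum(pts for attr, pts in SCORECARD.items() if attr in applicant_attrs)

def approve(applicant_attrs: set, cutoff: int = 80) -> bool:
    """Approve if the applicant's total score clears the cutoff."""
    return score(applicant_attrs) >= cutoff

borrower = {"stable_employment", "clean_payment_history"}  # scores 80
print(approve(borrower))             # True at the default cutoff of 80
print(approve(borrower, cutoff=90))  # False once the bar is raised
```

Lowering the cutoff is all it takes to “approve more applicants than usual” – no individual judgment is involved anywhere in the process.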

The data reveals none of this, even though it was crucial to the development of the late-2000s recession. What the data does show is that in 2007, 90-day mortgage delinquencies ran at triple their 2005 rate and foreclosures were up 79% on 2006 levels.

What the data cannot reflect is the ‘why’: why investors turned to risky credit default swaps and why banks loosened their underwriting standards. Answering that requires human input – explanation by way of perspective.

Data doesn’t tell the whole story when making decisions

Statistical credit scoring also offers a prime example of why Big Data alone can never solve all our problems.

Unlike computers, humans making decisions take into account not only the data available to them but also the ‘softer’ aspects of the decision – the intangible, experiential component known as qualia.

Under a strict credit scoring system, a loan applicant with a poor FICO score or other credit blemishes may be instantly declined for a housing loan by the underwriter, with no further thought given.
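The rigidity of that kind of instant-decline rule can be sketched in a few lines. The FICO cutoff, field names and function below are hypothetical, chosen purely to illustrate how a hard rule leaves no room for context:

```python
# Hypothetical rule-based underwriting decision: pure thresholds, no context.
# The cutoff (620) and field names are illustrative assumptions, not any
# real lender's criteria.

def auto_underwrite(applicant: dict, min_fico: int = 620) -> str:
    """Return 'APPROVE' or 'DECLINE' based purely on the numbers."""
    if applicant["fico_score"] < min_fico:
        return "DECLINE"  # no appeal, no explanation sought
    if applicant["credit_blemishes"] > 0:
        return "DECLINE"  # e.g. a utilities debt the applicant never knew about
    return "APPROVE"

# An identity-theft victim is declined exactly like a genuine bad risk:
victim = {"fico_score": 540, "credit_blemishes": 2}
print(auto_underwrite(victim))  # DECLINE
```

The rule has no branch for “the blemishes are fraudulent” – which is precisely the gap a human underwriter can fill.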

However, a human making the same decision might take a different approach. Instead of issuing a rejection and an accompanying form letter, a loan officer might discover that the applicant was in fact the victim of identity theft two years ago, with fraudulent credit accounts opened in their name. They might find that the applicant was unaware of an outstanding utilities debt because of a change of address – nuances a rigid algorithm leaves no room for. They may then forgive the poor credit score and approve an otherwise creditworthy borrower, who goes on to make every payment on time.

Conversely, an applicant who would be instantly approved under statistical credit underwriting might fare differently before an experienced loan officer. Small inconsistencies in payslips, too minor for automated systems to flag, or an in-person applicant whose face does not quite match their photo identification, would likely lead to a decline.

In this way, while Big Data is useful in assisting decision-making, computers cannot account for one of the most crucial components of any human decision – common sense.

About the Author


My name is Gence Emek and I’m a tech enthusiast currently living in London. I love to travel and I’ve been to over 30 countries but still haven’t been to the US. My goal for 2015 is to see New York and San Francisco. Other than London, my favourite city is Istanbul.
