
Data cognition engines can save millions in big data costs

In business, as in life, a pursuit of perfection often results in exponential increases in effort to achieve only minimal increases in quality.

With this in mind, we determine which tasks require the highest level of quality and which do not. Raising my children is an area where I strive for excellence, and it requires my continual focus. The emotional reward obviously justifies the significant investment, and I am grateful for this opportunity daily. On the other hand, mowing the lawn takes up only as much thought as is required to get the lawnmower running and push it around the yard every other week while my mind wanders aimlessly.

Most of what we do falls somewhere between those two extremes. And part of being a mature, effective professional, and human being, is figuring out the minimum quality of output each task requires, achieving it and moving on to the next.

In big data, too, costs rise exponentially in pursuit of only minimal increases in accuracy.

A new class of high-performance analytics based on data cognition engines takes a different approach to big data, focusing on answers that are “near perfect” and cutting costs by hundreds of thousands of dollars.

This new AI-driven approach allows the neural network at the heart of the data cognition engine to “learn” data sets and then provide results that are more than 99% accurate, without any ongoing access to the data itself, while shrinking the data footprint by many orders of magnitude.

Creating a new ‘brain’

A good analogy to how this new category of data cognition engines operates is that of a young child who speaks no English. Now take that child, teach her the language and then give her only one book to learn, say Harry Potter and the Philosopher’s Stone, and ask her to read it several times. At the end of this process, the child will become an expert in the narrative of the book and will be able to answer questions relating to the book with great accuracy even if she cannot quote it verbatim.

Similarly, data cognition engines teach a neural network a new language — in this case SQL, the language used to retrieve information stored in databases. Next, this new SQL-speaking “brain” is asked to read only one book — a specific data set — many times and study it well. At the end of the process, we are left with an extremely light brain that can answer any question relating to the data set without the need to retain the data set or consult it ever again.
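The vendor-specific internals of such engines are not public, but the general idea resembles what is sometimes called model-based approximate query processing. The minimal Python sketch below is purely illustrative: the data set, the query family (average purchase amount for an age range) and the use of NumPy and scikit-learn are assumptions for demonstration, not the actual implementation. A small neural network is trained on (query, exact answer) pairs and afterwards answers similar questions from its weights alone, without the raw rows.

```python
# Illustrative sketch only: a tiny "brain" that learns one family of aggregate
# queries over a data set, then answers without the data. All names, numbers and
# library choices here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# The "book": a raw (synthetic) data set of customer ages and purchase amounts.
ages = rng.integers(18, 80, size=200_000)
amounts = 20 + 0.8 * ages + rng.normal(0, 5, size=ages.size)

# The exact answer a database would return for something like
# "SELECT AVG(amount) FROM purchases WHERE age >= lo AND age < hi".
def exact_answer(lo, hi):
    mask = (ages >= lo) & (ages < hi)
    return amounts[mask].mean()

# "Reading the book many times": sample (query, true answer) pairs for training.
queries, answers = [], []
for _ in range(2_000):
    lo = int(rng.integers(18, 70))
    hi = lo + int(rng.integers(5, 20))
    queries.append([lo, hi])
    answers.append(exact_answer(lo, hi))

# The lightweight "brain": a few kilobytes of weights instead of the full data set.
brain = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
brain.fit(np.array(queries), np.array(answers))

# Query time: the raw rows are no longer needed; the model answers from its weights.
print("model estimate:", brain.predict([[30, 45]])[0])
print("exact answer:  ", exact_answer(30, 45))
```

The serialized model here weighs in at a few kilobytes while the raw arrays run to megabytes; scaled up, that gap is where the storage and speed savings described below come from.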

Now, here is the fascinating part. A child would likely need to spend a few years in school learning the English language to the level of proficiency needed to read, understand and absorb Harry Potter’s richly imaginative magical universe. In contrast, the data cognition engine’s brain can achieve the equivalent of this task in just a few hours!

Better still, this new brain no longer needs to have the book next to it. Moreover, since it is software-based, it can be easily cloned and spread around the edge of the network, operating IoT devices at incredible speed and with extreme accuracy.

This technology isn’t science fiction — it already exists and is being used in real life.

An electronics manufacturer that deployed a data cognition engine managed to replace a 60-gigabyte data set, containing 2 billion records with hundreds of columns and requiring 30 minutes to query, with a 2-megabyte data cognition engine. This data cognition brain can answer questions and provide its “hunches” at 99% accuracy or better within 0.1 millisecond. That represents a roughly 30,000x reduction in storage needs and an 18,000,000x speed improvement.
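Those headline ratios follow directly from the figures above; here is a quick back-of-the-envelope check in Python (treating 1 gigabyte as 1,024 megabytes):

```python
# Back-of-the-envelope check of the ratios quoted above.
storage_reduction = (60 * 1024) / 2    # 60 GB vs. 2 MB   -> roughly 30,000x
speedup = (30 * 60 * 1000) / 0.1       # 30 min vs. 0.1 ms -> 18,000,000x
print(f"storage: ~{storage_reduction:,.0f}x, speed: {speedup:,.0f}x")
```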

Returning to our Harry Potter book analogy, imagine replacing this 1.2-pound book with a 0.02-gram brain — equivalent to the weight of a small snowflake. And instead of taking, say, a month to read the full 422 pages of the book, the new brain can provide an answer about the narrative in a fraction of a second — literally, the blink of an eye.

Bringing secure supercomputing to the edge

Data cognition engines have a number of disruptive implications for IoT. They put the insights of tens of billions of rows of data into a sensor, a phone or a wearable device, offering the power of big data in a small, portable, cost-effective and secure IoT package.

The applications for manufacturing — where a sensor can make quality control decisions formerly handled by an experienced human, backed by gigabytes and terabytes of data — are already emerging. However, this is just the tip of the iceberg.

Imagine that your wearable fitness device with an integrated EKG reader — something that exists now — can look at your readings and compare them to the readings of everyone wearing the device with a similar health profile (age, weight, etc.) across the country or the globe. It can then compare those readings against actuarial charts and health data. Using those two massive, complex data sets — which will reside securely in the small amount of memory in the device through a data cognition model — it could alert you to a serious, undiagnosed health issue.

Because they don’t need ongoing access to the data after learning a massive data set, data cognition engines can offer major benefits around data privacy and security as well. Most users do not need to regurgitate row-level data from massive data sets in order to derive insights. For the majority of use cases, data cognition engines also present a secure system: the data can’t be hacked or reverse engineered to reveal sensitive personal information, because that information just isn’t there to steal.

No replacement for perfection

In data, as in life, there are times when only perfection will do.

The vast majority of big data use is focused on extracting insights from massive data sets, answering questions like “By what percentage are sales up in APAC regions over the last three quarters?” and “Who are the people most likely to purchase a dog during the month of February?”

However, there are times when row-level accuracy is a must. To be sure, when you need to know a customer’s credit card number or exact bank balance, cognition engines are not the right tool for the job. This is when you’ll need to go with big iron and crunch the really big data for that one specific piece of information.

In contrast, data cognition engines will help us focus on the tasks where near-perfect accuracy — delivered securely, inexpensively and quickly — beats slow perfection. And, at 99% accuracy, the line between those two categories is blurring.

Many business cultures push for perfection in everything we do. In my experience, however, the people who are most effective are those who understand when near-perfect is perfectly acceptable. For those who reach this realization in the world of data, there are now major rewards to be gained and advances that will change how human beings access, and benefit from, data insights.

