Data is Gathered.

Intelligent Data is Made.

Discover how InsideView makes data intelligent.

All data is not created equal. Some is accurate; some is not. Some is actionable, while much of it is worthless. What makes data useful and trustworthy is the methodology behind it: how it's gathered, where it comes from, and how it's organized, analyzed, and made richer than what you could gather on your own. That's what makes it intelligent.

“With the volume of Big Data growing exponentially, and changing at an ever-increasing rate, data providers need a methodology – processes and technology – that can gather, analyze, and validate massive amounts of data at scale.  Otherwise the data will be inaccurate and irrelevant by the time it’s delivered to you.”
– Jason Muldoon, InsideView VP Product Development

The MTV Methodology

InsideView employs a proprietary “MTV” methodology to gather, analyze, and validate data.

M Multi-sourced. InsideView gathers data from multiple sources – more than 40,000 editorial, news, financial, and social sources – including world-class vendors such as Thomson Reuters, CapIQ, and Equifax.

Equally important, we employ multiple data creation and gathering methodologies, unlike many commercial data providers. Why does this matter? Because each methodology has strengths and weaknesses.

For example, crowd-sourcing produces a large volume of data, but the data tends to have a high percentage of duplicate and stale records.



T Triangulated. Triangulation is at the heart of InsideView’s algorithmic technology, which also takes advantage of machine intelligence and text analysis to validate and make sense of the disparate and conflicting information. Let’s break it down.

Machine Intelligence
Simply put, our data scientists teach computers to "think" like humans: to take disparate bits of information, see the relationships between them, understand their meaning, and convert the result into useful information. Once "taught," the computers can process massive amounts of data with human-like logic at speeds no editorial team could match. That speed is essential in a world where Big Data grows bigger every second, and the rate at which it changes grows even faster.

We use this machine intelligence to triangulate data and analyze text.


The essential premise of triangulation is that consensus among multiple sources is the best indicator of truth. The broader the agreement, and the better the sources, the more reliable the data.

We pull data from thousands of sources, not simply to amass enormous amounts of data, but to triangulate each point to determine its validity and usefulness. Think of the data we aggregate as an extremely loud conversation, with thousands of sources chattering at once, calling out employee counts, revenues, contact information, and so on. It’s chaotic, with wild disagreements, people changing their minds, and voices everywhere insisting that they’ve got the latest news.

Our advanced triangulation technology is able to pick out, from this uproar, what different sources agree about. It’s also able to generate new data. For example, by following contacts over time, we’re able to assemble employment histories used to strengthen our Connections.
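The consensus idea behind triangulation can be sketched in a few lines of Python. This is a hypothetical illustration, not InsideView's actual algorithm: each source reports a value for a field (say, an employee count), each source carries an assumed reliability weight, and the value with the greatest total weighted support wins.

```python
from collections import defaultdict

def triangulate(reports, weights):
    """Pick the value with the most weighted agreement among sources.

    reports: dict mapping source name -> reported value
    weights: dict mapping source name -> reliability weight (0..1)
    Returns (winning_value, confidence), where confidence is the
    winner's share of the total weight across all sources.
    """
    support = defaultdict(float)
    for source, value in reports.items():
        # Unknown sources get a neutral default weight.
        support[value] += weights.get(source, 0.5)
    winner = max(support, key=support.get)
    return winner, support[winner] / sum(support.values())

# Three sources agree on an employee count; one dissents.
reports = {"filing": 5200, "news": 5200, "crowd": 4800, "vendor": 5200}
weights = {"filing": 0.9, "news": 0.6, "crowd": 0.3, "vendor": 0.8}
value, confidence = triangulate(reports, weights)
print(value, round(confidence, 2))  # 5200 0.88
```

The source names and weights here are invented for the example; the point is only that agreement among heavier-weighted sources outvotes a single stale or low-trust source.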

Bottom line: Triangulation is our primary means of validating data, which must meet our high standards for relevance and reliability to become part of the InsideView database. It’s the reason our contact data is up to 20% more accurate than that of other data providers, quarter after quarter. And it’s how we make our data more intelligent.

Text Analysis
Text analysis – a form of natural language processing – comes into play for unstructured data, such as news articles and blogs. It enables our computers to “read” blocks of text to determine relevance, categorize insights, and extract facts that may be used to inform our data.

Often, news articles are the first to report critical business events that can indicate a change in a company’s name, structure, location, etc. This analyzed text feeds into the triangulation process, along with the structured data.
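As a toy illustration of categorizing news into business-event triggers, consider the keyword matcher below. It is a deliberately simplified sketch with invented category names; production text analysis relies on natural language processing models rather than keyword lookup.

```python
# Hypothetical trigger categories and the phrases that signal them;
# a real system would use NLP, not simple substring matching.
TRIGGERS = {
    "leadership_change": ["appoints", "names new", "steps down", "resigns"],
    "relocation": ["relocates", "moves headquarters", "new headquarters"],
    "acquisition": ["acquires", "merger", "buyout"],
}

def categorize(article_text):
    """Return the business-event categories a news snippet appears to signal."""
    text = article_text.lower()
    return sorted(
        category
        for category, keywords in TRIGGERS.items()
        if any(kw in text for kw in keywords)
    )

headline = "Acme Corp acquires Widget Inc and moves headquarters to Austin"
print(categorize(headline))  # ['acquisition', 'relocation']
```

Facts extracted this way (a new headquarters city, a new owner) can then be fed into the triangulation step as one more source voting on a data point.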

V Validated. Triangulation is our primary means of validating data, as the most reliable way to automatically and continuously authenticate large volumes of data at scale. We then go one step further, providing an easy mechanism for users to flag and correct inaccurate data points, which our editorial staff verify before updating the official record.

A History of Technology Leadership

InsideView was founded nearly a decade ago to harness the exploding universe of B2B data and deliver it to sales and marketing organizations in a way that was easy to consume. We knew we couldn’t rely on traditional methods. We turned to technology, instead.

When others were relying on a single source of data, InsideView gathered data from multiple sources – today more than 40,000. We leveraged triangulation technology to compare the various sources, resolve conflicts, and determine what was most reliable. While others relied on human editorial teams to validate data, we developed algorithms to validate data with human-like logic, at speeds humans could never touch.

Today, others talk about having thousands of sources and using triangulation. They've been jumping on the Insights bandwagon as well, adding news feeds to their data pools. We were there years ago – not only delivering real-time news and alerts, but using technology to analyze text, weed out irrelevant feeds, and organize what remained into meaningful categories based on key business triggers. We still use all these methods today, and continue to improve on them, leading the way with patents pending on several innovations while others work to catch up.

