Hidden Biases in Cybersecurity Reviews – And How to Use Them


Technology reviews can be a temptingly easy way to gain insight into the often impenetrable world of enterprise cybersecurity products, but you need to know how to use them.

The fact is that while all technology reviews have some value, all reviews also contain hidden biases — and sadly, those biases are often overlooked and misunderstood by buyers. Ferreting out those biases is important if we’re going to find tools that will make a difference in our IT environments. To help, we’ll cover the pros, built-in biases, and suitability of each type of technology review and how to use each review type as a buyer.

The table below contains a summary of the four major types of tech reviews, as well as links to the detailed discussions. We’ll also give a brief overview of the common problem that applies to most reviews: a poor understanding of the statistics needed to produce credible results.

| Review Types | Key Pros | Built-in Biases | “Best for” Buyer Type |
|---|---|---|---|
| Quadrant Reviews | Very in-depth analysis by industry experts | Sample size, big vendors, big customers, happy customers, have time, features over capabilities biases | Largest enterprises and government organizations |
| Customer Review Websites | Specific testimonials from customers, list of vendors in the market | Sample size, compensated review, moment in time, extreme (happy or mad) customers, have time, and hidden context biases | Any business looking for competitors in a category and potential pros and cons |
| Technology Review Articles | Widest perspective, least hidden biases | Sample size, big vendor bias, broad circumstances bias, features over capabilities bias, hidden context biases | Any business looking for in-depth coverage of a broader range of vendors and their features |
| Reddit Threads and User Forums | Interactive and very specific | Sample size, black hat help, compensated reviews, moment in time, extreme (happy or mad) customers, have time, and hidden context biases | Any size business buyer looking for specific feedback on specific issues or technologies |

The Common Problem With Reviews: Statistics

To be statistically sound, a sample must be large enough to represent the entire population of interest. Sample size calculators can provide an estimate of the minimum sample size for surveys or the minimum number of organizations to interview for reviews to have high confidence in the answer and a low error rate.

However, one assumption buried in that math is that there will be an even distribution of respondents. In other words, the sampling procedure will ask questions of each type of customer in an equal way so that each point of view will be adequately represented within the respondents.

For example, when surveying the market for email security, survey results should include respondents from each category that might represent different needs. The survey should have representation from categories such as: 

  • Company size: small to large
  • Industry vertical: healthcare, energy, etc.
  • Organization type: corporate, education, utility, non-profit, government
  • International regions: Asia, South America, North America, etc.

Without even representation from across the industry, the survey or review can only say something about the specific subgroup represented. We will provide specific examples about this as we dig into each type of review.

Market Intelligence Reviews: Best for the Biggest Buyers

Experienced industry professionals will be intimately familiar with the Gartner Magic Quadrants, Forrester Research Reports, and similar market intelligence reports that segregate cybersecurity companies into leaders and laggards. The network security, next-generation firewall (NGFW), and other tool vendors that find themselves in the leader category will immediately push out public relations campaigns to make sure potential buyers know about their leadership status, and vendors in other categories will promote their positive mentions too.

These reports suffer from some significant sampling errors that limit their usefulness. These errors don’t invalidate the hard work of the analysts, but they do suggest that the reports only really work well for buyers at the largest companies, for the following reasons:

  • Sample Size Bias
  • Big Vendor Bias
  • Big Customer Bias
  • Happy Customer Bias
  • Have Time Bias
  • Features over Capabilities Bias

We will explore each of these biases in more detail below and show why they limit the usefulness of these reports. Even among this limited audience, the analyst will need some differentiating aspect and will often turn to the number of features, which further biases the results.

Buyers should still use these reports to understand the market and the trends, but put less emphasis on the position of the vendors in the rankings. Poorly ranked or excluded vendors may offer the perfect solution for the buyer’s specific needs — especially if they are a smaller organization that does not need every feature in a tool.

Sample Size Bias

First and foremost, for a sample to reflect an industry, it needs to be sufficiently large and evenly distributed. In an Enterprise Email Security announcement, Forrester discloses that “37 customer references” were interviewed to create the report. A sample size calculator shows that 37 interviews only delivers a 95% confidence level and a 5% margin of error if the total population, or customer market, is about 40 companies.
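For readers who want to check the arithmetic, here is a minimal sketch of the standard Cochran sample size formula with a finite population correction, which is roughly what typical sample size calculators implement; the function name and defaults are our own illustration, not any specific calculator’s API.

```python
from math import ceil

def required_sample_size(population: int, z: float = 1.96,
                         margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula with a finite population correction.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative assumption about how varied the answers will be.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                   # finite population correction
    return ceil(n)

print(required_sample_size(40))      # ~37 interviews cover a 40-company market
print(required_sample_size(10_000))  # ~370 interviews needed for a 10,000-buyer market
```

In other words, 37 interviews is roughly the right number for a market of 40 customers; a market of thousands of potential buyers would require an order of magnitude more.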

Although the Market Intelligence analyst companies try to position the reports as universal, a 40-company population doesn’t even represent the Fortune 100 companies, let alone the entire market of possible buyers. These reports cannot truly be reflective of an industry as a whole and can only be considered statistically valid for the specific types of customers interviewed — which are not disclosed.

Big Vendor Bias

Researchers typically will start with the industry market leaders and ask them for customer referrals. The analyst is only human, so they have limited capacity to conduct in-depth analysis of every competitor in the marketplace. This inevitably means that the biggest vendors get the most attention — in fact, market share is often used as a ranking factor — and not every promising startup will get the evaluation they might merit.

Big Customer Bias

Vendors want to refer customers that will impress an analyst. For example, protecting the 14,417 students of the Davenport, Iowa school district is important, but nowhere near as impressive as a tool that protects the 186,000 employees of the Ford Motor Company. If given the choice, a vendor will always refer Ford to an analyst — it just carries more weight. Analyst firms will often cite the impressions of their own clients, which again will slant toward larger companies.

Happy Customer Bias

A vendor’s clients may well give honest reviews to the analyst, but the vendor doesn’t want to look bad. Instead of providing references from both renewing customers and customers that decided not to renew, the vendor will always provide references for its happiest customers.

Have Time Bias

Vendors typically need to ask their customers for permission to use them as a reference. Some customers simply will not have the time to participate in lengthy interviews with analysts and will be excluded from the process. Only customers that have time will provide information to the review process. This will limit the participation of overburdened users of a product, who might otherwise have much to say about whether a tool helps their efficiency.

Features Over Capabilities Bias

Analysts interview customers and may not have an opportunity for hands-on access to the tool. Of course, when interviewing happy customers, they will rarely have significant complaints that allow for distinction and differentiation among the vendors.

Instead, the analysts will often turn to the number of features available in the tool and the problems those feature sets might solve. The analyst, backed by positive reviews and relentlessly positive marketing materials from the vendor, might assume the features all work perfectly well. Therefore, more features can result in a better ranking without regard for how well those features work. And how well those features work is one of the most difficult things to ascertain in a market that suffers from an “information asymmetry.”

Peer-to-Peer Review Websites: Best for Researching Smaller Vendors

Peer-to-peer review websites such as Gartner Peer Insights, G2, and TrustRadius allow users of technology to post reviews about products. These guides promote their reviews as an accurate representation of the usability and effectiveness of the technology category and the vendors covered on the site. Buyers feel reassured by a mix of positive and negative information and assume it represents authentic information.

While the problem is not as profound as with Market Intelligence Reviews, peer-to-peer reviews also suffer significant sampling errors that limit their usefulness. The credibility of these sites is further undermined by a number of significant issues:

  • Sample Size Bias
  • Compensated Review Bias
  • Moment in Time Bias
  • Extreme (Happy or Mad) Customer Bias
  • Have Time Bias
  • Hidden Context Bias

At best, these biases make peer-to-peer reviews representative only of motivated customers with extra time. Some hidden biases can even invalidate specific customer reviews and skew aggregate rankings. However, in spite of these issues, buyers should use these sites to learn about a broad spectrum of competitors within a category and obtain an idea of potential issues to investigate further.

Sample Size Bias

As with Market Intelligence Reviews, many peer-to-peer ratings suffer from a sample size bias. While many tools will have hundreds of ratings and reviews, without understanding more detail about the size of the market or the type of organization providing the review, it is impossible to determine if the results are statistically significant. At best, the buyer should read the results as “of the organizations that reviewed this tool, the rating is…”

Compensated Review Bias

Although peer-to-peer websites work hard to eliminate bot users, they can still be biased by reviewers who are paid to write positive or negative reviews without any transparency. Most often, a buyer might detect paid reviews by spotting a series of positive reviews that are too similar to one another, which suggests a required script of some sort.
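As a purely illustrative sketch, and not a feature any review site provides, a buyer comfortable with scripting could screen a batch of collected reviews for suspiciously similar wording; the word-overlap measure and the 0.6 threshold below are assumptions chosen for this example.

```python
from itertools import combinations

def word_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two review texts."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a and not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def flag_similar_reviews(reviews, threshold=0.6):
    """Return pairs of review indexes whose wording overlaps suspiciously."""
    return [(i, j, round(word_overlap(reviews[i], reviews[j]), 2))
            for i, j in combinations(range(len(reviews)), 2)
            if word_overlap(reviews[i], reviews[j]) >= threshold]

reviews = [
    "Great product, easy setup, the support team was fantastic",
    "Great product, easy setup, and the support team was fantastic",
    "Deployment took months and licensing costs kept climbing",
]
print(flag_similar_reviews(reviews))  # flags the first two near-identical reviews
```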

Some companies will even hire people to post negative reviews about competitors, and these are always difficult to identify with certainty. However, some of these negative reviews may stand out if the same type of review is present on multiple peer-to-peer websites, or if the complaint happens to align with the strength of a competitor.

Finally, even legitimate reviews by registered and validated reviewers may be biased by compensation. We typically think of compensation in the form of cash, but legitimate customers may also be compensated through product discounts to post positive reviews.

Moment in Time Bias

Perfectly legitimate reviews might have been absolutely accurate when published, but become outdated over time. Peer-to-peer sites can filter by date, but they do not delete older reviews or exclude them from the ratings results over time.

Buyers need to keep in mind that older reviews may no longer reflect the current capabilities of the tools. Additionally, old reviews may reflect a reviewer’s novice mistakes or limited abilities at the time. Gartner Peer Insights allows filtering for reviews in the last year, which is a good start for learning about more recent issues and experiences.

Extreme (Happy or Mad) Customer Bias

Customers indifferent to a product will rarely be motivated to post a review. Typically only the very happy and the very unhappy customers will feel strongly enough to take the time to praise or complain about a product in public. While this may provide useful positive and negative feedback, it obscures how many customers might feel the product is “just OK.”

Have Time Bias

As with Market Intelligence Reviews, customers must have excess time to devote to providing a review. Unless motivated by extreme feelings or compensation, very busy customers will often not make time to log into a site and type out a thoughtful review.

Hidden Context Bias

Neither positive nor negative reviews provide the context of the infrastructure or the needs of the organization. Was the positive review based on a simple requirement that does not test the capabilities of the solution? Was the negative review based on infrastructure insufficient for deployment? Were the installers or users even technically competent? Anecdotal peer-to-peer reviews without context limit the usefulness of the information.

Technology Review Articles: Best for Perspectives

Technology Review articles, such as the many found here on eSecurity Planet, provide broader coverage of technology than Market Intelligence Reviews with fewer hidden biases than peer-to-peer reviews or tech forums. However, buyers need to understand how to use these reviews appropriately, considering the biases that remain, such as:

  • Sample Size Bias
  • Big Vendor Bias
  • Broad Circumstances Bias
  • Features over Capabilities Bias
  • Hidden Context Bias

We will explore each of these biases in more detail below and show why they make these reviews representative of the technology in general rather than of specific circumstances. Buyers may need to look deeper into the author and the Technology Review publisher to fully understand potential biases, yet these reviews regularly help buyers understand distinguishing features and select vendors for further investigation.

Sample Size Bias

Although most Technology Review websites do not claim to be statistically representative of all companies, the opinions of the writer, and possibly the editor, make for an incredibly small sample size. Experience can help to provide a broader perspective, but our reviews will remain the perspective of a small number of individuals.

Big Vendor Bias

Technology Review sites have the freedom to cover more companies than a Market Intelligence Review analyst, but our reviews retain some Big Vendor Bias. While we will put effort into considering all relevant vendors, new vendors and smaller vendors that have less available information, such as customer reviews or independent test results, may be deemed too niche for coverage in best-of articles.

Broad Circumstances Bias

We cannot know who may encounter a review article on our website or their circumstances. Therefore, we write our articles for as broad an audience as possible, from the inexperienced IT novice helping out a small business to the IT expert working for the largest enterprise. Where possible, we point out the suitability of the tool, but our broad circumstances bias may make a review less directly applicable to a specific buyer’s situation.

Features Over Capabilities Bias

Although our writers often bring first-hand experience to bear when writing our review articles, we often do not have current hands-on access to specific tools, let alone access to all of the tools in a category. Even when we have access to demonstrations or can create a test environment, the results of our testing will be specific to that demo or lab.

While we can often read between the lines of marketing materials and user manuals, we cannot know for certain the true performance of each feature of a tool or service. Instead, as with Market Intelligence analysts, we often turn to the number of features available in the tool and the problems these feature sets might solve.

Hidden Context Bias

As authors, we are naturally biased by our experience. However, unlike a Peer-to-Peer reviewer, writers for Technology Review articles tend to be public and we post bios to help readers understand our experience and determine potential biases.

Some context biases will be less obvious than others. For example, if we send out requests for quotes to 12 vendors and only receive information back from three, then we will naturally have more access and be able to cover those three respondents in more detail.

Credible Technology Review websites will keep the writers separated from the sales teams to ensure unbiased reviews. eSecurity Planet, for example, publishes our Editorial Policy, and we will always note sponsored content. Unfortunately, not all Technology Review websites will be so transparent and the sponsored content may also result in a hidden context bias.

Reddit Threads and Tech Forums: Best for Specific Questions

Reddit technology threads and tech forums provide an opportunity for direct peer-to-peer communication and collaboration. Buyers or tech team members will often post questions to the other members to solicit feedback on product categories, specific products, solutions for specific problems, and more.

Unlike other options, threads and forums allow for unfiltered responses from peers, which will often be perceived to be much more credible. However, buyers need to understand the inherent biases such as:

  • Sample Size Bias
  • Black Hat Help
  • Compensated Review Bias
  • Moment in Time Bias
  • Extreme (Happy or Mad) Customer Bias
  • Have Time Bias
  • Hidden Context Bias

Exploring each of these biases helps explain why these reviews are best for specific questions regarding specific circumstances. Some hidden biases might invalidate specific responses or potentially expose the buyer’s organization to harm. However, in spite of these issues, buyers can use these sites effectively to understand specific issues in specific contexts.

Sample Size Bias

As with the other review sources, forums and threads suffer from a sample size bias. Typically, only a handful of people will reply to specific questions, but even with hundreds of responses, the reader will lack the context to understand how the responses might reflect the industry as a whole. At best, the buyer should read the results as “of those that replied, their opinion is…”

Black Hat Help

Anyone can join most forums or Reddit threads, and validation tends to be quite casual, which can allow malicious hackers to be present. While black hat technology reviews may be relatively genuine, they may also probe for unnecessary details to determine whether they have located a potential victim. Malicious responders may also attempt to push low-quality solutions or encourage the download of “free” software laden with malware.

Compensated Review Bias

Forums and Reddit threads do not police for compensated opinions. While vendor replies will be transparently biased, a buyer will have no transparency into paid representatives that lie about their affiliation or do not disclose compensation. As with Peer-to-Peer websites, paid reviews can be used to promote products or smear competitors.

Moment in Time Bias

Perfectly legitimate feedback on a forum might absolutely represent the genuine experience of the reviewer, but that experience may be out-of-date and may not reflect the current capabilities of the tool or even the improved abilities of the reviewer. Most reviewers will not be eager to qualify a bad review with “back when I didn’t know what I was doing…”

Extreme (Happy or Mad) Customer Bias

As with any customer review, an indifferent user usually won’t care enough to provide feedback. Only very good or very bad experiences motivate users strongly to praise or complain about a product in public. While this may provide useful positive and negative feedback, a buyer will not be able to understand the size of the average-experience audience.

Have Time Bias

As with any user review, customers must have excess time to devote to providing a review, but this bias is even more pronounced on forums and within Reddit threads. In addition to drafting a response, the writer also has to have the spare time to monitor the site or thread for topics on which they can contribute.

Hidden Context Bias

As with Peer-to-Peer websites, initial responses to forums and Reddit threads do not generally provide the context of the environment or the respondent’s experience with their feedback. Opinions will be skewed by anecdotal experience unique to the respondent unless the IT environment is specifically discussed or the competence of the respondent is recognized by the reader or other peers.

This bias can be reduced by specifically asking for the experience or IT environment for additional context. However, the buyer should also keep in mind that not all responses will be genuine. Some people may lie out of embarrassment, others to conceal malicious motives, and others may simply recall details incorrectly.

Bottom Line: Understanding Reviews Makes for Useful Reviews

The British statistician George Box wrote, “All models are wrong, but some are useful.” Buyers should apply the same principle to all reviews. Fortunately, even a cursory understanding of biases can provide enormous perspective on the types of problems inherent in different types of reviews and give a buyer a chance to use review information constructively and for their own benefit. Keeping the limitations in mind as you sift through the available data will make you a better-educated buyer — and at a minimum, help you ask the right questions to fill in as much of the missing information as you can.

For a sampling of our product reviews, see The 34 Most Common Types of Network Security Solutions.


