Disruptive Discrimination – a Downside of Big Data
Big data has changed how companies hire, monitor performance, and manage worker risk. While analytics are often presented as objective decision-making tools, their real-world use raises serious legal and ethical concerns—especially for workers with disabilities. The risks of data-driven discrimination in hiring and workforce structuring are now more significant than ever.
A recent industry discussion of big data research shows that employers increasingly rely on predictive insights without fully evaluating whether those insights disadvantage protected groups, particularly disabled employees and applicants.
What Is Workplace Discrimination at Its Core?
Discrimination happens when a protected characteristic, such as age, race, gender, religion, disability, or national origin, is used to treat someone worse than others in hiring, pay, promotions, work assignments, or termination decisions. Historically, even the most serious employment discrimination cases were limited by the lack of mass data. But big data introduces something entirely new:
- Predictive analytics
- Automated risk-based hiring models
- Health behavior inference from consumer data
- Employee profiling by HR without direct medical disclosure
Today, employers may infer health and disability risk not only from genetics but also from:
- Online searches
- Consumer purchase history
- Credit and financial behavior
- Pharmacy and insurance claim trends
- Lifetime digital activity patterns
If this data is paired with HR-held pharmacy reports or insurance records, companies may claim they are measuring “risk.” However, legal challenges often argue that inferred health risk becomes de facto disability discrimination when it influences hiring choices.
Why Predictive Analytics Can Become a Legal Trap
Predictive analytics can estimate future health needs, surgery likelihood, pregnancy risk, or chronic illness probability. The problem is that such predictions do not prove a worker cannot perform the job today. Using algorithmic assumptions to deny opportunity may:
- Violate anti-discrimination statutes
- Create disparate impact liability (a simple four-fifths-rule check is sketched after this list)
- Trigger regulatory inquiries
- Invite class-action challenges against employers
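To make the disparate-impact point concrete, here is a minimal sketch of the kind of check compliance teams and plaintiffs' experts run: comparing selection rates across groups against the EEOC's four-fifths (80%) guideline. The hiring numbers, group labels, and threshold below are illustrative assumptions, not real figures or legal advice.

```python
# Minimal sketch: adverse-impact (four-fifths rule) check on hiring outcomes.
# All numbers and group labels below are hypothetical illustrations.

def selection_rate(hired: int, applicants: int) -> float:
    """Share of applicants in a group who were hired."""
    return hired / applicants if applicants else 0.0

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate if reference_rate else 0.0

# Hypothetical outcomes from an algorithm-assisted screen.
outcomes = {
    "no inferred disability risk": {"applicants": 400, "hired": 80},
    "inferred high health risk":   {"applicants": 100, "hired": 8},
}

rates = {g: selection_rate(o["hired"], o["applicants"]) for g, o in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

In this hypothetical, candidates flagged as high health risk are hired at 40% of the reference group's rate, well under the 80% guideline that typically prompts closer scrutiny.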
It remains unclear how companies will ultimately apply predictive health data. Yet, the economic incentives for risk-based screening make misuse extremely likely unless companies adopt compliance safeguards.
How Employers Can Protect Themselves from Liability
Many leading companies now attempt to offset this liability through structured compliance programs, including:
- Third-party algorithmic bias audits
- Transparent documentation on hiring criteria
- Elimination of inferred-health factors in “green-lighting” candidates (see the sketch after this list)
- Legal review of HR analytics
- Human override processes (not only automation)
- Regular policy corrections for disability compliance
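To illustrate two of these safeguards, the sketch below pairs a blocklist check that keeps inferred-health proxies out of a scoring model with a human-override rule for rejections. The feature names, threshold, and policy are assumptions for illustration only, not a description of any vendor's actual system.

```python
# Sketch of two compliance safeguards: a health-proxy feature blocklist
# and a human-override gate on automated rejections. Feature names and
# the policy below are hypothetical.

BLOCKED_HEALTH_PROXIES = {
    "pharmacy_spend", "insurance_claims", "health_purchase_category",
    "predicted_pregnancy_risk", "chronic_illness_score",
}

def validate_features(model_features: set) -> None:
    """Refuse to score candidates if any blocked health proxy is a model input."""
    leaked = model_features & BLOCKED_HEALTH_PROXIES
    if leaked:
        raise ValueError(f"Remove health-proxy features before scoring: {sorted(leaked)}")

def decide(candidate_id: str, model_score: float, threshold: float = 0.5) -> str:
    """Automation may green-light a candidate, but rejections go to a human reviewer."""
    if model_score >= threshold:
        return f"{candidate_id}: advance to interview"
    return f"{candidate_id}: route to human review (no automated rejection)"

# Example usage with hypothetical inputs.
validate_features({"years_experience", "certifications", "skills_match"})
print(decide("candidate-001", model_score=0.72))
print(decide("candidate-002", model_score=0.31))
```

The pattern, not the specifics, is the point: health proxies are screened out before any candidate is scored, and adverse decisions are never left entirely to automation.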
The Wall Street Journal and Fortune published articles about the problem this week, and it deserves attention. I’ve included a link to the Fortune article here (Fortune Article), as it is not behind a paywall, although the WSJ article was on point, too. Let me know your thoughts.
I’ve also posted this on Facebook (here), if you’d like to interact with us there.