The use of artificial intelligence (AI) models to make decisions about actual or potential employees and consumers carries the risk of "disparate impact" (unintentional discrimination). Employment and credit markets are of particular interest to the legal community because both are subject to regulations that prohibit discrimination on the basis of protected classes such as age, color, disability, genetic information, national origin, race, religion, sex, and veteran status. Consequently, federal regulators of these sectors have increased their technical capabilities and signaled an interest in AI-related enforcement to protect employees and consumers. Left unchecked, discrimination arising from the growing use of AI decision-making could expose companies to substantial fines and class-action lawsuits and invite new AI regulation. This article describes ways to identify, remedy, and reduce the potential for bias in AI applications, with a focus on employment and credit markets. We describe how the use of AI decision models by employers and lenders can lead to bias, as well as techniques that regulators and litigators could use to identify disparate impact.
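One common screen for disparate impact in employment settings is the "four-fifths rule" from the EEOC's Uniform Guidelines: a selection rate for a protected group that is less than 80% of the rate for the most-favored group is treated as evidence of potential adverse impact. The sketch below illustrates that calculation; the group names and selection counts are hypothetical, and the four-fifths threshold is a screening heuristic, not a legal conclusion.

```python
# Illustrative sketch of the "four-fifths rule" adverse-impact screen.
# Group names and counts below are hypothetical, for demonstration only.

def selection_rate(selected, applicants):
    """Fraction of applicants who received a favorable decision."""
    return selected / applicants

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the most-favored group's rate."""
    return group_rate / reference_rate

# Hypothetical hiring outcomes: (selected, applicants) per group.
groups = {"group_a": (48, 80), "group_b": (36, 90)}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
reference = max(rates.values())  # rate of the most-favored group

for g, r in rates.items():
    air = adverse_impact_ratio(r, reference)
    flag = "potential adverse impact" if air < 0.8 else "passes 4/5 screen"
    print(f"{g}: selection rate={r:.2f}, AIR={air:.2f} -> {flag}")
```

In this hypothetical, group_a is selected at a rate of 0.60 and group_b at 0.40, giving group_b an adverse impact ratio of about 0.67, below the 0.8 threshold and thus flagged for further review.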