According to Gartner Research, through 2022, 85 percent of AI projects will deliver inaccurate outcomes due to bias in data, algorithms, or the teams responsible for managing them. Meanwhile, 85 percent of Americans already use at least one AI-powered device, program, or service, the Gallup polling firm reports.

People program and teach AI systems, so those systems tend to absorb the natural biases of the people who teach them. Hosting expensive AI systems in the cloud is actually making things worse: the number of companies that can afford AI has grown, but the number of people with strong AI skills hasn't kept pace. Beyond the ubiquity of natural bias in AI tools, this talent shortage means that mistakes in the knowledge bases under development will remain common for some time.

What do these biases look like? Women may find they are getting the short end of the stick. Men do the majority of AI development and training, so their conscious or unconscious biases get encoded. For example, although 27 percent of CEOs in the US are women, a 2015 study found that only 11 percent of the people shown in a Google Images search for "CEO" were women.

Companies will pay for these built-in AI biases. For example, they will need to absorb the profit hit of not writing loans to enough women, who make up about 55 percent of the market.

What can be done about this? The reality is that biased AI systems are more the norm than the exception. So IT needs to recognize that the biases exist, or may exist, and take steps to limit the damage. Fortunately, tools are emerging to help you spot AI-based biases.

This article first appeared on InfoWorld.