AI innovators have already been warned of the risks their chatbots bring, from potentially costing millions of people their jobs to “hallucinating” false information when fielding user queries. But a more immediate risk may be the prejudices such software reinforces: according to University of Washington (UW) researchers, the best-known model, OpenAI’s ChatGPT, appears to be biased against people with disabilities in how it ranks resumes. The Seattle-based team found that ChatGPT “consistently ranked resumes with disability-related honours and credentials” as “lower than the sam…