Yao Yang is a journalist turned machine learning researcher and developer. She leads R&D at Accenture Tech Labs on a new platform that checks for bias in data and mitigates bias in models to produce fair outcomes for users. This work includes processing and verifying data from various sources, recognizing opportunities, and developing techniques for programmatically monitoring and enhancing trust throughout model development and execution.
Links mentioned in this episode:
Stanford paper mentioned on word embedding: https://www.pnas.org/content/pnas/115/16/E3635.full.pdf
Learn more about AI fairness and bias in this ebook: http://ibm.biz/BdqMvS