Responsible AI and ML ethics: the beginning


I was honored to participate in Ignite 2020 as an attendee and as an MC for one of the Table Talks. Microsoft moved the event online and added new Community Zone “sessions” – Table Talks. Bruno Capuano, Matthew Renze, Ivana Tilca and I got to discuss a very important topic, “The promise of AI and ML – Will it change the world?”, with the attendees. It was great to hear different opinions and stories and to read the comments in the chat.

We talked a lot about data, about processing it before passing it to ML, and about responsible AI in general. As Mitra Azizirad said in her keynote: “We believe the full potential of AI can only be realized when it’s put in everyone’s hands making it accessible and approachable…”. You can watch the recording here: https://www.youtube.com/watch?v=J0GJI8_REt4 or find it along with other awesome sessions on the https://myignite.microsoft.com website.

It’s essential to invite people with different backgrounds, ethnicities and knowledge to be involved in ML development; that way, we reduce bias and add more inclusivity. But we also need to make sure no one involved has malicious intent. It’s always about balance. When people say AI went out of control, that usually means the data they used wasn’t cleaned and normalized, or that new data is being fed in from an untrusted source. ML models always operate on the principle of “garbage in – garbage out”. I often remind people about Tay, the bot created by Microsoft that learned from Twitter. It began to post inflammatory and offensive tweets through its Twitter account within several hours, and Microsoft shut down the service only 16 hours after its launch. Thanks to that experiment, we know Twitter (and other social media) is not the right place to let your bot learn from freely. So, how do you make sure your ML model doesn’t become an uncontrollable monster?

Tay AI
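
Since “cleaned and normalized” can sound abstract, here is a minimal sketch of what basic cleaning and normalization might look like with pandas and scikit-learn. The file name, columns and the “trusted sources” filter are all made-up assumptions for illustration:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: the file name and columns are made up for this example.
df = pd.read_csv("reviews.csv")  # columns: text, rating, source

# Basic cleaning: drop exact duplicates and rows with missing values.
df = df.drop_duplicates().dropna(subset=["text", "rating"])

# Keep only rows that come from sources we trust.
trusted = {"internal_survey", "verified_purchase"}
df = df[df["source"].isin(trusted)]

# Normalize numeric features so no single column dominates training.
df["review_length"] = df["text"].str.len()
numeric_cols = ["rating", "review_length"]
df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])
```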

Microsoft came up with Responsible AI – a set of guidelines and tools for governing AI development: https://www.microsoft.com/en-us/ai/responsible-ai. I know Google offers something similar, but I haven’t had a chance to go through it in detail, so let me know if you have. All users – both developers and end users – agree to use the AI tools provided by Microsoft in a way that follows the Responsible AI guidelines. Beyond reading the guidelines, having a diverse team and cleaning your data, you can use the available tools to make sure your ML model is moving in the right direction:

Microsoft Responsible AI

  • You need to understand how AI works in general and in your specific case: what pros and cons you have, and what can be done to minimize possible negative impact. To build that understanding – in addition to research and investigation before creating a model, logging during usage, and making sure it’s trained on relevant data – you can use tools such as InterpretML (https://interpret.ml/) and Fairlearn (https://fairlearn.github.io/); there’s a short code sketch after this list.
  • Data used for ML, as well as the models themselves, should be protected. In the Tech Community Table Topic (there’s a great conversation going on here: https://techcommunity.microsoft.com/t5/data-ai/the-promise-of-ai-and-ml-will-it-change-the-world/m-p/1684461), Anderson Nascimento Nunes suggested that researchers and developers who work with AI learn more about cybersecurity and understand how it applies to real-life scenarios. There is also a way to train an ML model on encrypted data that never needs to be decrypted for training. This is generally referred to as “secure computation” and is a fairly large area of research. Microsoft has an exciting tool in this space: https://www.microsoft.com/en-us/research/project/ezpc-easy-secure-multi-party-computation/. I’m sure there are other options available, but it’s a good example to check out.
  • Creating your ML model is just the beginning. Treat your model like any other piece of software: use MLOps (DevOps for ML) for deployment; log errors, failures and warnings; and update, retrain and redeploy the model so it’s using the latest data (the second sketch after this list shows the idea). According to the official Microsoft documentation: “MLOps, or DevOps for machine learning, enables data science and IT teams to collaborate and increase the pace of model development and deployment via monitoring, validation, and governance of machine learning models.” If you’re using Azure ML, you have access to MLOps by default; for ML.NET there’s a brand-new community open-source Ops solution: https://github.com/aslotte/MLOps.NET. You can also find more MLOps resources provided by Microsoft here: https://github.com/Microsoft/MLOps.
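
To give an idea of what the fairness and interpretability tools mentioned above look like in code, here is a minimal sketch combining Fairlearn’s MetricFrame with InterpretML’s explainable boosting model. The synthetic data, feature names and the “gender” column are made-up assumptions for the example, not a recommended workflow:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame
from interpret.glassbox import ExplainableBoostingClassifier

# Hypothetical toy data: the features and the sensitive "gender" column are made up.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "income": rng.normal(50, 15, n),
})
gender = pd.Series(rng.choice(["F", "M"], n), name="gender")
y = (X["income"] + rng.normal(0, 10, n) > 50).astype(int)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, gender, random_state=0
)

# InterpretML: a glass-box model whose learned behavior can be inspected.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
# from interpret import show; show(ebm.explain_global())  # interactive dashboard

# Fairlearn: break a metric down by groups of a sensitive feature.
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=ebm.predict(X_test),
    sensitive_features=g_test,
)
print(frame.overall)   # accuracy over everyone
print(frame.by_group)  # accuracy per group ("F" vs "M")
```

If the per-group numbers differ significantly, that’s a signal to go back to the data and the model before shipping anything.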
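
And as a very small taste of the MLOps side, here is a sketch of registering a trained model in an Azure Machine Learning workspace with the azureml-core Python SDK, so it can be versioned, deployed and later retrained. The workspace config, file path, name and tags are assumptions for the example:

```python
from azureml.core import Workspace, Model

# Assumes a config.json downloaded from your Azure ML workspace is present locally.
ws = Workspace.from_config()

# Register the trained model file so every deployment is versioned and auditable.
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",   # hypothetical local path to the trained model
    model_name="reviews-classifier",  # hypothetical model name
    tags={"stage": "experiment", "data_version": "2020-09"},
)
print(model.name, model.version)
```

From there, the same SDK or a CI/CD pipeline (Azure DevOps, GitHub Actions) can deploy the registered model and trigger retraining when new data arrives.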

Another impressive Microsoft toolkit is aimed at architects of innovative solutions; it’s called “Responsible Innovation”: https://docs.microsoft.com/en-us/azure/architecture/guide/responsible-innovation/. Microsoft has released it as experimental, so you have a chance to provide feedback and suggest updates.

Before I wrap up this post, I want to share the latest blog post by Eric Boyd, Microsoft’s Corporate Vice President for Azure AI, where he covers responsible use of AI for spatial analysis: https://azure.microsoft.com/en-us/blog/build-powerful-and-responsible-ai-solutions-with-azure/.

Microsoft Ignite Spatial Analysis demo

The topics of Responsible AI and ML ethics are endless: developers, researchers, organizations and community activists are discussing them and continuously refining the approach. I hope the Table Talk and Table Topic at Ignite, along with this quick overview, are useful and spark your interest in these topics. As always, feel free to contact me through social media if you want to discuss them or share feedback, or leave a comment in the Tech Community forum and discuss with other ML enthusiasts from all over the world.