Sunday, July 23, 2023

 Keeping an Eye on AI

Recent announcements and developments in the field of Generative AI have triggered a race to "AI first" systems. Within two months, over 100 million users rushed to experiment with ChatGPT, and every large product organization has been making announcements about new products, use cases, and LLMs. This creates a ripple effect across business and operations: pressure for faster adoption, driven in no small part by fear of missing out. What also needs to evolve at the same speed and urgency is local and global governance for these AI systems and use cases, at both the technological and the ethical level. While the entire premise of AI is independence in creating, learning, and making decisions, we already have sufficient and significant examples to show that we need to keep an eye on AI systems and design them in a way that enables monitoring of key parameters such as efficacy, bias, data usage, and adherence to legal frameworks.


• Hallucination handling: It is not unusual for humans to speculate or make statements based on assumptions. With human-like creativity, AI has also learned to provide made-up answers when it lacks the necessary facts and data on a topic; this is called hallucination. More techniques for handling hallucination will emerge in the near future, but two things are key today: first, train the AI model on as much relevant data as possible; second, give users a feedback option so they can report a hallucination to the system owners for corrective action and intervention.
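The feedback option described above could be as simple as a structured report channel back to the system owners. A minimal sketch, with an entirely hypothetical schema (`HallucinationReport`, `FeedbackLog` are illustrative names, not a real library):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HallucinationReport:
    """One user-submitted report flagging a suspect AI answer (hypothetical schema)."""
    prompt: str
    answer: str
    reason: str

@dataclass
class FeedbackLog:
    """Collects reports so system owners can review, correct, and retrain."""
    reports: List[HallucinationReport] = field(default_factory=list)

    def report(self, prompt: str, answer: str, reason: str) -> None:
        self.reports.append(HallucinationReport(prompt, answer, reason))

    def pending(self) -> int:
        """Number of reports awaiting owner review."""
        return len(self.reports)

log = FeedbackLog()
log.report("Who won the 1900 Mars Olympics?",
           "The team from Olympus Mons won gold.",
           "Event does not exist; answer is fabricated.")
print(log.pending())  # → 1
```

The point of the design is that hallucinations become visible, attributable events in a queue rather than silent failures.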

 

• Watermarking: With AI creating human-like artifacts, it is becoming increasingly important to differentiate what is generated by AI from what is not; hence, watermarking AI-generated content should be a basic, standard principle.
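To make the idea concrete, here is a deliberately toy illustration that hides a tag in generated text using zero-width Unicode characters. Real AI watermarking schemes are statistical (biasing token choices at generation time) and robust to editing; a literal hidden string like this is trivially strippable, so treat it purely as a sketch of the "invisible to humans, detectable by machines" principle:

```python
# Zero-width space / zero-width non-joiner used as invisible "bits".
ZWC0, ZWC1 = "\u200b", "\u200c"

def watermark(text: str, tag: str = "AI") -> str:
    """Append the tag, encoded as invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZWC1 if b == "1" else ZWC0 for b in bits)

def is_watermarked(text: str, tag: str = "AI") -> bool:
    """Detect the invisible suffix that watermark() appends."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    suffix = "".join(ZWC1 if b == "1" else ZWC0 for b in bits)
    return text.endswith(suffix)

stamped = watermark("The quarterly summary looks strong.")
print(is_watermarked(stamped))                              # → True
print(is_watermarked("The quarterly summary looks strong.")) # → False
```

The stamped text displays identically to the original on screen, which is exactly why a machine-readable check is needed.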

 

 

• Sustainability: In a Q1 2023 interview, Sam Altman, OpenAI's CEO, mentioned that the cloud infrastructure cost of training and running ChatGPT is "eye-watering". OpenAI and similar organizations use thousands of chips and other hardware (directly or indirectly), as well as energy, with a significant and probably equally eye-watering environmental impact, ranging from carbon footprint to the e-waste generated by out-of-use hardware. This makes it critical that every AI use case in business and IT weighs its possible sustainability impact against the benefit the AI system will generate. A technological ecosystem needs to evolve that helps us better review and decide on the environmental impact of any IT system.

 

• Performance monitoring: In the current fast-paced adoption of AI into business, the ultimate business functionality is often treated as paramount, but alongside it we also need a comprehensive approach to performance monitoring and benchmarking tailored to each system. While it is tempting to rely on product vendors' promises, we need to embed our own performance reviews and benchmarks, which will drive overall efficacy improvements for the system and its models.
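Embedding your own benchmark can start very small: a fixed evaluation set you own, run against the model on a schedule. A minimal sketch, where `toy_model` is a hypothetical stand-in for a real model call and exact-match accuracy stands in for whatever metric fits your use case:

```python
from typing import Callable, Dict, List, Tuple

def benchmark(model: Callable[[str], str],
              gold: List[Tuple[str, str]]) -> Dict[str, float]:
    """Run the model over a fixed evaluation set; report exact-match accuracy."""
    hits = sum(1 for prompt, expected in gold
               if model(prompt).strip().lower() == expected.strip().lower())
    return {"cases": len(gold), "accuracy": hits / len(gold)}

def toy_model(prompt: str) -> str:
    # Hypothetical stand-in for an actual model/API call.
    return {"capital of France?": "Paris"}.get(prompt, "unknown")

gold_set = [("capital of France?", "Paris"),
            ("capital of Spain?", "Madrid")]
print(benchmark(toy_model, gold_set))  # → {'cases': 2, 'accuracy': 0.5}
```

Because the gold set is yours, the numbers track your use case rather than a vendor's marketing benchmark, and re-running after every model or prompt change turns regressions into something you catch, not something you discover in production.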

 

• Human in the loop: In the early stages of AI adoption, for any system and process we must keep a human in the loop to accept or reject the outcome generated by AI. This establishes accountability within the team or organization for the decisions taken with AI.
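One way to enforce this is an approval gate: the AI output is inert until a named human reviewer signs off, which is what creates the accountability trail. A minimal sketch with hypothetical names (`Decision`, `human_review`, `execute` are illustrative, not a real framework):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    """An AI-proposed action that must not run without human sign-off."""
    ai_output: str
    approved: Optional[bool] = None   # None until a human reviews
    reviewer: Optional[str] = None    # who is accountable

def human_review(decision: Decision, reviewer: str, approve: bool) -> Decision:
    """Record an explicit human accept/reject, naming the reviewer."""
    decision.approved = approve
    decision.reviewer = reviewer
    return decision

def execute(decision: Decision) -> str:
    """Refuse to act on anything a human has not explicitly approved."""
    if decision.approved is not True:
        raise PermissionError("AI output not approved by a human reviewer")
    return decision.ai_output

d = Decision(ai_output="Refund the customer's order")
human_review(d, reviewer="ops-lead", approve=True)
print(execute(d))  # → Refund the customer's order
```

The key design choice is that `execute` checks for an explicit `True`, so both "rejected" and "never reviewed" fail closed.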

 

• Transparency: We discussed watermarking above, but watermarks are invisible to human eyes and mostly detected by systems. It is equally critical that all users know and understand which part of the decision or outcome handed to them comes from AI. Systems therefore need to be designed to give clear warnings and disclaimers to business users, along with an explanation of how the result was arrived at.
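Unlike a machine-readable watermark, this disclaimer must be visible. A minimal sketch of pairing every AI output with a human-facing provenance label (the `LabeledOutput` class and model name are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    """Pairs content with a visible provenance disclaimer for end users."""
    content: str
    model: str
    ai_generated: bool = True

    def render(self) -> str:
        """Always lead with the provenance notice, never with the content."""
        note = (f"[AI-generated by {self.model}; verify before acting]"
                if self.ai_generated else "[Human-authored]")
        return f"{note}\n{self.content}"

out = LabeledOutput("Forecast: demand likely to rise next quarter.",
                    model="summarizer-v1")
print(out.render().splitlines()[0])
# → [AI-generated by summarizer-v1; verify before acting]
```

Rendering the notice first, rather than as a footnote, reflects the point of the bullet: the user should know the provenance before reading the content, not after.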

