Responsible AI Institute (RAI Institute), a leading non-profit that builds AI assessments and a certification program dedicated to converting responsible AI principles into practice, announced the launch of the first Responsible AI Consortium in a series of consortia comprising leading corporations, technology providers and experts from global universities. This inaugural consortium, focused on healthcare, aims to accelerate the responsible development and use of generative AI technologies through collective learning, experimentation and policy advocacy. It is built around a unique hands-on generative AI testbed that enables members to actively experiment with and refine responsible generative AI technologies in a real-world healthcare context.
A diverse group of distinguished experts from the NHS, Harvard Business School, the Turing Institute and St Edmund’s College at the University of Cambridge, along with industry partners including Trustwise, enables unique knowledge-sharing across the AI value chain, spanning academia, policymakers, investors and healthcare providers.
“We are in the middle of rapid advancements and adoption of generative AI and navigating the responsible AI landscape is proving to be a formidable challenge for all,” said Manoj Saxena, Founder and Chairman at RAI Institute. “Now, more than ever, we need to work together to make AI safe and aligned with human values. The creation of our Responsible Generative AI Consortium, with its practical testbeds and GenAI Safety Ratings, is a vital step towards our mission of helping AI practitioners to build, buy and supply safe and trusted AI systems.”
The Healthcare Responsible GenAI Consortium: Shaping Healthier Futures with Trustworthy AI
The Responsible GenAI Consortium in Healthcare is the first in a series of industry-specific consortia and GenAI testbeds to be launched by The Responsible AI Institute, with others being planned for later this year and next. The testbeds will incorporate the new Generative AI Safety Rating — a scoring system that grades the safety and reliability of generative AI systems, providing organizations, AI developers, policymakers, investors and other stakeholders a clear measure of system performance, fairness and compliance with rules and regulations.
The rating is based on the evaluation of various criteria and metrics including but not limited to bias detection, model hallucination, IP and privacy protection, transparency and accountability. Similar to the FICO credit score model, the numerical representation of the Generative AI Safety Rating will help guide improvements, enable progress tracking and foster a culture of responsible AI development and deployment.
The Responsible Generative AI testbeds are an integral part of Responsible AI Institute’s broader mission to promote the adoption of trustworthy AI by driving the following impacts:
The Responsible AI Consortium will serve as a hub for knowledge sharing and resource pooling, allowing individuals and organizations to learn from one another. Activities will include hosting workshops, conferences, webinars, as well as developing educational resources. It will also facilitate cutting-edge research, sharing of case studies and executive education programs in responsible adoption of generative AI.
By providing a live and open generative AI testbed with independent, standards-aligned Generative AI Safety Ratings for organizations and individuals, the consortium will encourage a more robust and diverse testing ground for new ideas and experiments in the field of generative AI. The consortium will enable corporations, researchers, policymakers, investors and individuals to work together on novel generative AI use cases and facilitate access to data sets, computational resources, open-source communities and testing platforms.
The consortium will provide expert insights to policymakers, regulators and investors, helping them make informed decisions about laws and shape regulations that both promote the ethical use of AI and are conducive to sustainable AI innovation. It will raise awareness about responsible generative AI at all levels, from grassroots community organizations to national and international policy forums. It will also create informational campaigns, engage media and policymakers, and act as a unified voice for its members, amplifying their concerns and suggestions in public debates with policymakers and sustainability-focused investors.
Building Towards a Future Where Responsible AI Becomes the Norm Across All Industries
As AI continues to evolve, it is becoming imperative to shift from an algorithm-centric AI design mindset to one that aligns with and fulfills human values and sustainability goals. The Responsible AI Consortium and its testbeds will play a critical role in this evolution from foundation models to human-centric AI design by providing an interactive learning, experimentation and advocacy environment where consortium participants can collaborate and apply the principles of responsible AI to real-world use cases and problems.
“Generative AI brings a unique set of promises and perils, and it’s advancing faster than previous AI technologies. As its development progresses, there is a pressing need for AI systems that empower human beings and promote equity,” said Dr. Satish Tadikonda, Senior Lecturer of Entrepreneurial Management at Harvard Business School. “The Responsible AI Consortium was created to allow everyone — employees, companies, individual consumers and even society at large — to be able to trust and scale AI with confidence and take ownership in shaping the future of responsible AI.”
With its upcoming series of responsible AI consortia and companion testbeds, RAI Institute and its partners hope to establish and distribute useful roadmaps, architectures, white papers and tools to provide a range of benefits for key roles across industries such as:
- Enterprise AI system developers can leverage these best practices to enable safe and responsible deployment of generative AI models in production and minimize model errors and data commingling. Developers will be empowered to create more accurate and efficient models without compromising safety and compliance, and to make better-informed decisions about developing, deploying and scoring the safety of generative AI models.
- Policymakers can manage generative AI and its surrounding policies more efficiently to ensure the protection of consumers and promote the responsible use of AI. The testbeds also offer a platform on which they can collaborate with other experts in their respective fields as well as share knowledge and best practices.
- Technology vendors, platforms and tool providers can become members of the consortium and access the testbeds to experiment, demo, test and validate their software with generative AI use cases and access assessments to support their products and offerings.
- The general public can learn about the benefits and potential drawbacks of generative AI and provide user feedback that will help shape the future development of generative AI. This will also provide a platform for people to participate in public discussions about the use of generative AI as well as advocate for and support regulations that prioritize transparency, accountability and equity.
The Symposium on Responsible Generative AI in Healthcare features multiple panels with experts from leading global healthcare and life sciences companies, academia, venture and private equity, and technology vendors discussing today’s most pressing generative AI-related challenges. These include:
- Role of ethics, accountability, and leadership in navigating the generative AI era
- Lessons from the frontlines in putting trustworthy generative AI to work
- Strategies for leadership development and capacity building
- Business model and monetization strategies for responsible commercialization of generative AI