The Chaos at OpenAI

The chaos surrounding Sam Altman's departure from OpenAI is too significant to ignore. We need to pause and consider what it means for the future of AI development in medicine.

The dispute between the board and Sam Altman centers on whether to prioritize commercialization or safety. Without financial support, AI development and adoption will slow down. Without safety alignment, AI, especially a forthcoming AGI, could harm human society. The current compromise appears to support both, but as separate teams: one, led by Ilya Sutskever, remains at OpenAI, while the other, led by Sam Altman, moves to Microsoft. Even so, concerns about OpenAI's survival remain.

When it comes to medical uses of AI, we believe safety must always be the priority. If OpenAI falls apart amid the recent chaos, will we get safe AI from Microsoft? How strong will the safety guardrails be in Microsoft's future AI products?

In a pioneering book, The AI Revolution in Medicine: GPT-4 and Beyond, the authors fully agree that “the impending AI revolution in medicine can and must be regulated.” One of the authors, Peter Lee, is the VP of Microsoft Healthcare. In the book’s foreword, Sam Altman also wrote, “In particular, this book shows situations where GPT-4 may not always be accurate or reliable in generating text that reflects factual or ethical standards. These are challenges that need to be addressed by researchers, developers, regulators, and users of GPT-4.” So we tend to believe Microsoft will be responsible and cautious about the medical use of AI.

Fortunately, many foundation AI models important to medicine and drug development do not rely on GPT. For example:

  • Researchers from the University of Florida and NVIDIA developed a generative large language model for medical research and healthcare. According to the authors, the model even passed a Turing test administered by physicians. The code and model weights are open-sourced.

  • Generate:Biomedicines recently announced their publication in Nature describing Chroma, a generative AI model that can design novel proteins. The company is also making the code and model weights freely accessible to academic researchers and non-profit entities.

Chroma, a programmable generative model for proteins

##########

If you find the newsletter helpful, please consider:

  • 🔊 Sharing the newsletter with other people

  • 👍 Upvoting on Product Hunt

  • 📧 Sending any feedback, suggestions, and questions by directly replying to this email or writing reviews on Product Hunt

  • 🙏 Supporting us with a cup of coffee.

Thanks, and see you next time!
