Insights from the 2025 HealthNext AI Summit
This week, Cornell Tech, Weill Cornell Medical College, and the New York Academy of Sciences — along with partners Grey Matter Capital and Flare Capital Partners — hosted the third annual HealthNext Summit. The event convened scientists, entrepreneurs, health system leaders, payors, and providers to address how the current wave of AI technology is challenging fundamental assumptions about care delivery, medical expertise, and human connection in healthcare — revealing a sector wrestling with profound transformations.
In formal sessions and side conversations across the two-day event, a few key themes emerged. Healthcare leaders are focused on the ways AI might enable more humane interactions, address scarcity, and lead to new kinds of partnerships. Increasingly, leaders are recognizing that the risk of inaction on AI may be as great as, or greater than, any potential risks of implementation.
The humanization challenge
We know artificial intelligence is poised to transform healthcare, but the contours and implications of that transformation remain uncharted. Many of the discussions at the summit highlighted how AI applications could enhance provider-patient connection rather than replace the most human elements of care. This framing imagines a future where technology handles documentation, administrative tasks, and routine analysis in order to free up valuable time for clinicians to practice at their most human. Douglas Rushkoff cautioned against the “quantization” potential of AI and implored the audience to preserve “that space between the quantized bits.” Doing so would allow for the maintenance of more human connection and interaction.
To fully realize the positive benefits, technologists and health providers will need to work together to identify real problems and design AI implementations to solve them, rather than starting with a technology in search of a problem. In a compelling discussion among leaders of large health organizations, Memorial Sloan Kettering Cancer Center President Dr. Selwyn Vickers urged the audience to recognize that “AI is a tool, not a strategy.” By adopting this problem-first approach, healthcare leaders are less likely to get distracted by shiny new objects that can stand in the way of addressing genuine healthcare needs.
Ultimately, the largest challenge likely won’t be developing the technology, but designing systems that distribute responsibilities between humans and AI. The most successful implementations will be those that thoughtfully allocate tasks based on comparative advantage, assigning to AI what AI does best while preserving and enhancing the human and interpersonal elements that give healthcare its meaning and impact.
Moving from scarcity to abundance
Historically, healthcare has been built around the scarcity of clinical expertise — with payment systems, access protocols, and delivery models all designed to ration a limited resource.
One of the more disruptive implications of AI in healthcare would be the dramatic expansion of reliable and care-relevant medical knowledge that could transform care from a scarcity-based model to one of abundance. As AlleyCorp’s Dr. Alexi Nazem suggested, “diagnosis for a dollar” could become a reality, democratizing access to medical expertise that has historically been limited by geography, socioeconomics, and systemic barriers.
This potential shift raises profound questions about how clinicians’ roles will evolve, how payment models must adapt, and what structures need to change to capitalize on this new abundance while ensuring quality and safety. This is particularly true for areas where innovation is most needed but overlooked by traditional venture capital frameworks: domains with meaningful social value but limited commercial potential. Pediatric care, behavioral health, and preventive services are all examples that require different approaches to innovation and funding, even in the face of technology-driven disruption.
Current healthcare technologies, particularly the dominant electronic health record platforms, were primarily designed to support billing rather than clinical care or patient outcomes. The electronic medical record has had a detrimental impact on many core parts of the healthcare experience, leaving providers frustrated and adding new inefficiencies to an already taxed system. In his keynote address, Dr. Vickers aptly described electronic records as “a clunky billing system that happens to have healthcare data in it.” This billing-first approach has created fragmented systems that inhibit rather than enhance the delivery of care. Thoughtfully deployed, AI has the potential to reorganize healthcare around value rather than transactions.
Academia’s uncertain role in health AI
Across the two-day summit, health and tech leaders discussed the growing gap in access to computational resources between academic institutions and private industry. With many AI models requiring massive computing infrastructure, academic medical centers — traditionally at the forefront of research innovation — may find themselves unable to independently advance relevant AI implementations.
This scarcity of compute is driving new models of collaboration as academic institutions seek partnerships with technology companies and other private entities that possess the necessary computational capacity. However, these partnerships raise important questions about research priorities, intellectual property ownership, and the role that academia will play in the development of the next generation of AI models for health.
Even without access to cutting-edge technologies, some researchers are still benefiting from more readily available AI tools. One example mentioned across the summit is the use of large language models to accelerate the traditionally slow process of scientific discovery. Researchers with access to LLMs can now “virtually test hypotheses” against the existing body of literature before conducting physical experiments. This capability could fundamentally change how research questions are prioritized and pursued, potentially compressing the time required for what are now lengthy investigations.
The academic-industrial divide in computing power could reshape how medical research advances, with profound implications for what problems get solved, who controls the resulting technology, and how equitably its benefits are distributed. Building models that preserve academic independence and public benefit while leveraging private computational resources represents one of healthcare’s most pressing innovation challenges.
Balancing risks and benefits
When evaluating AI technologies, healthcare leaders must consider not just the risks of implementation but also the costs of inaction. In areas facing overwhelming provider shortages and access barriers, carefully implemented AI solutions could provide meaningful progress compared with the status quo.
This balanced approach to risk assessment requires considering the full context of healthcare needs and limitations, rather than evaluating new technologies in isolation. The question becomes not whether AI solutions are perfect but whether they improve upon current realities.
Photo by Charles Parker