The good, the bad and the unknown: The future of AI in North Carolina

Key Takeaways
- UNC‑Chapel Hill leaders stress the urgent need to build an AI‑literate workforce that can adapt to rapidly changing technology.
- AI is already accelerating cancer detection, personalized treatment, and drug‑trial enrollment, but requires stronger data infrastructure and privacy safeguards.
- OpenAI’s chief economist warns that AI‑driven job displacement will have long‑term effects on wages and career trajectories, urging graduates to cultivate leadership and decision‑making skills.
- Anthropic’s Claude Mythos model has uncovered widespread security vulnerabilities, highlighting the geopolitical race to control powerful AI capabilities.
- North Carolina’s State Treasurer is using AI to parse municipal financial data, turning an overwhelming “fog” into predictive insights for taxpayers.
- Privacy officials caution that seemingly innocuous offers—like a free burrito—often conceal data‑collection practices that feed AI models, urging consumers to read the fine print.
- Academics and job‑seekers are looking for ways to leverage AI as a competitive edge, from automating routine tasks to enhancing personal productivity.
UNC Provost Calls for AI‑Ready Workforce
Provost Magnus Egerstedt highlighted one of the higher‑education sector’s biggest challenges: preparing students for a job market that evolves faster than curricula can keep up. “You’ve got to be able to use the tools,” Egerstedt said, adding that the future will belong to professionals able “to see around corners, who can make new connections, who can keep a finger on the pulse of the future with very limited and oftentimes contradictory information.” He argued that baseline AI literacy—not just familiarity with generative chatbots—is essential for graduates to apply the technology responsibly in their chosen fields.
AI Accelerating Medical Diagnoses and Trials
Ashok Krishnamurthy, director of the Renaissance Computing Institute, explained how AI is already shortening cancer diagnosis timelines in North Carolina, where 41% of colon‑cancer patients present only after an emergency‑department visit. “Rather than a manual process that can take months or years to diagnose, AI‑based tools will enable doctors to quickly evaluate clinical notes and have a diagnosis in hours or days,” he said. Melissa Haendel of the UNC School of Medicine added that AI can streamline clinical‑trial enrollment, a frequent bottleneck that delays new‑drug development, but stressed that “greater investments in data infrastructure and privacy protections” are prerequisites for these benefits to scale safely.
OpenAI Economist Warns of Labor‑Market Disruption
Ronnie Chatterji, OpenAI’s first chief economist and a Duke Fuqua professor, described his dual role: monitoring monthly job‑market statistics and advising stakeholders on how to navigate an AI‑shaped future. “One of my jobs is to watch those numbers every month and try to figure out how long the job market will look the way it does today,” Chatterji said. “The other piece of my job is a lot harder, which is trying to tell people what to do with the future.” He cited research showing that graduating during a recession or technology shock can depress earnings and career prospects for up to 15 years, and referenced a Challenger, Gray & Christmas report that identified AI as a top driver of U.S. job cuts in March, especially in tech sectors where AI can replace coding functions.
Anthropic’s Claude Mythos Raises Security Concerns
Anthropic’s forthcoming Claude Mythos model drew attention after the company disclosed that the preview had already uncovered “thousands of high‑severity vulnerabilities, including some in every major operating system and web browser.” In a blog post, Anthropic warned that, given the rapid pace of AI progress, such capabilities could soon spread beyond actors committed to safe deployment. Thompson Paine, Anthropic’s head of geopolitics, warned Chapel Hill attendees that if a rival nation achieved similar breakthroughs first, the United States might lose the chance to coordinate a global patch‑response: “Do you think they would have reached out to the U.S. government to figure out how we patch as much software as possible? … Do you think that they would have reached out to ten American companies to make sure that we’re covering as many vulnerabilities as possible before this technology goes public?”
State Treasurer Uses AI to Cut Through Financial Fog
State Treasurer Brad Briner described a pilot project that employed ChatGPT to sift through financial reports from more than 1,100 municipal governments across North Carolina. “It’s just so enormous, you can’t see through the fog,” Briner said. “We can see through the fog now. Now we can have a predictive ability to tell the citizenry, your municipality is doing this. We know how this ends.” He announced plans to expand AI use within his office to detect budgeting oversights—such as the recent inadequacy flagged in Rocky Mount—before they become costly problems for taxpayers.
Privacy Experts Warn About Hidden Data Traps in “Free” Offers
During a session on AI infrastructure and privacy, Chief Privacy Officer Martha Wewer posed a pointed question to panelists: what should users consider before clicking on an email promising a free Chipotle burrito? Ozgün Ataman, CTO of Well, responded that consumers must examine the fine print to determine whether a company is retaining personal data, for how long, and whether that data will be used to further train AI models. “You can choose to allow them to do that, but you should at least be aware of that choice,” Ataman said. DJ Sampath, senior vice president for AI software at Cisco, echoed the sentiment, noting that AI itself can help users digest those agreements: “That should give you an instant understanding of what am I giving away to be able to get that free burrito.” He concluded that privacy decisions hinge on understanding the trade‑off between convenience and data exposure.
Academics and Job Seekers Seek Competitive Edge in AI Era
The conference closed with a discussion on how North Carolina’s academics and job‑seekers can harness AI to gain an advantage. Panelists suggested that beyond automating routine tasks, individuals can use AI‑driven analytics to identify emerging skill gaps, tailor resumes to market demands, and even simulate interview scenarios. Sampath reminded attendees that while AI offers powerful productivity boosts, the ultimate responsibility for ethical use remains human: “From a privacy perspective, I think what becomes really, really important is you have to understand that trade‑off. What is okay by me, and what is not okay by me?” The consensus was clear—those who marry technical AI fluency with critical thinking and ethical awareness will be best positioned to thrive in an AI‑augmented economy.