Introduction
This line stayed with me after our interview with Elidor, a Senior LLM Engineer at Ralabs.
Elidor has worked across multiple domains – from safety systems to betting platforms, and now healthcare. That range gives him a sharp perspective on how artificial intelligence is being used in medical settings, what challenges it faces, and where it’s heading next.
So we sat down to talk about AI in medicine – what’s working, what’s not, and why progress in this field isn’t moving as fast as in others.
Healthcare Demands More Than Just Smart Models
That’s where our conversation began – with a clear and non-negotiable truth. A system that gets it right most of the time might be fine in e-commerce or logistics. In healthcare, that same margin of error can cost a life.
The weight of that trust is what separates healthcare from other fields. And the path to earning it is complicated, especially when it comes to data. Unlike data in most other industries, healthcare data isn’t freely available. It’s highly regulated, deeply private, and often siloed. Public datasets are rare, and when they do exist, they’re often unbalanced – skewed toward certain populations.
AI models reflect the data they’re trained on. They’re not biased by design – they inherit the patterns in the data. To Elidor, precision in healthcare AI is about far more than model size or performance benchmarks. It’s about building systems that won’t generalize recklessly. He compares it to what we see in computer vision: narrow, specialized models designed to do one thing well – like detecting defects in steel or identifying tumors in scans.
Without the right data – diverse, representative, and clean – even the most advanced systems won’t be enough. And in healthcare, “falling short” can’t be part of the plan.
Building Healthcare AI? Precision Starts with the Right Partner.
Ralabs works with companies navigating high-stakes, high-compliance environments – from LLM integration to full-scale healthcare systems.
Why Smarter Isn’t Always Safer
Today’s AI tools are faster, more flexible, and more powerful than ever. But in healthcare, raw capability isn’t enough. Tools that perform well in one setting can still misfire when stakes are high.
One of the most pressing risks he highlights is hallucination – when an AI generates information that sounds correct but isn’t grounded in fact. In everyday use, that might mean a confused chatbot. In healthcare, it could mean a false diagnosis or a suggested treatment that does more harm than good.
The risk increases when working with large, complex documents. AI-powered retrieval tools are becoming common, but they bring their own problems – especially when systems summarize or reframe medical records.
He points to chunking – the common practice of breaking long documents into smaller pieces for analysis – as one source of potential error. Important context can be lost between chunks, or the model can link unrelated passages, creating a false impression of a patient’s condition.
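To make that concern concrete, here is a minimal sketch of the kind of sliding-window chunking many retrieval pipelines use. The function name and the chunk and overlap sizes are illustrative assumptions, not values from the interview; overlapping windows are one common way to reduce the chance that a sentence is cut in half at a chunk boundary, though they don’t eliminate the problem Elidor describes.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 200) -> list[str]:
    """Split a long document into overlapping character windows.

    The overlap keeps shared context between neighbouring chunks, so a
    finding mentioned near a boundary still appears intact in at least
    one chunk. Sizes here are placeholders for illustration only.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")

    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks


# Example: a long clinical note split into windows that share 200 characters
# of context, so details near a boundary are not silently dropped.
note = "Patient reports intermittent chest pain over the past week... " * 40
windows = chunk_text(note, chunk_size=800, overlap=200)
print(len(windows), "chunks")
```

Even with overlap, a summary built from retrieved chunks can still stitch together passages from different parts of a record – which is exactly why Elidor insists the output stays a suggestion, not a conclusion.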
Elidor’s view is clear: AI should support decisions, not make them. Responsibility must stay with the professionals – the people trained to evaluate nuance, understand the limitations, and act on what the AI suggests.
Despite all this, he doesn’t view AI as a threat. What he pushes back against is the illusion of reliability – the assumption that high performance in one domain means high performance everywhere.
AI as a Tool, Not a Decision-Maker
With so many risks on the table, it’s fair to ask: why are healthcare companies still pushing forward with AI? If every output needs to be verified, isn’t it just adding more complexity? Not necessarily.
The real value of AI in healthcare today isn’t in replacing decisions – it’s in retrieving data, streamlining workflows, and freeing up time for professionals. Many of these tools are used not for diagnosis, but for administrative and support tasks: pulling patient records, organizing documents, assisting with communication between staff.
These are the kinds of improvements that don’t often make headlines, but they make a real difference. Faster access to the right files can mean faster patient responses. Structured data can help doctors prepare better for each case. And support staff can focus on care, not just paperwork.
Even in early-stage diagnostic support tools, Elidor stresses the importance of expert oversight. Whether it’s a symptom checker or a triage assistant, the system should always be treated as a recommendation engine – not as a standalone authority.
Innovation Within Boundaries: AI and the Role of Regulation
AI in healthcare doesn’t move fast – and there’s a reason for that. Accessing and working with sensitive data isn’t just a technical problem. It’s a matter of law, ethics, and trust. Frameworks like HIPAA in the U.S. and GDPR in Europe define what’s allowed – and what isn’t.
Elidor has worked under both.
He was involved in projects where GDPR compliance shaped every decision. He recalls how even basic elements – like using location data, browsing patterns, or chat logs – triggered long internal reviews and approval processes.
That friction, he notes, isn’t there to stop progress – it’s there to slow it down on purpose, to make sure data isn’t misused. But that also means experimentation becomes more costly, more resource-heavy, and slower to execute – especially when compared to countries with looser restrictions.
He’s careful not to frame this as a complaint. Instead, it’s a reality check on what responsible AI development actually looks like in regulated sectors. It’s slower, it’s more complex – and that’s part of what makes it safe.
At Ralabs, these considerations are built into every project from day one. Especially in healthcare, teams work closely with legal, security, and compliance experts.
To him, the challenge isn’t whether we need regulation. It’s how we innovate within it – and how we build systems that are not only effective, but also respectful of the people they’re designed to help.
Looking Ahead: Smarter Tools, Human Decisions
When asked what the next five years of AI in healthcare might look like, Elidor doesn’t talk about replacing doctors or automating diagnoses. His vision is more grounded – and more powerful because of it.
That future won’t rely on a single, all-knowing system. Instead, he believes it will be built from smaller, highly specialized models – each designed to solve a very specific problem. From scanning and diagnosis to data processing and support tools, precision will depend on focus.
One area he’s especially optimistic about is the rise of multimodal large language models – systems that can understand and combine text, images, and speech in the same context. These tools could radically improve how patient information is retrieved, interpreted, and used.
He also sees opportunities in scan enhancement and automation – especially in fields like radiology, where current instruments still rely heavily on human interpretation and carry risks, including radiation exposure. AI could help improve scan clarity, reduce false positives, or even suggest additional views when data looks unclear.
Still, Elidor is clear that even with these advances, AI won’t replace expertise. If anything, the more powerful the tools get, the more important expert oversight becomes.
He says what many in the field are thinking: even if some engineering tasks are automated, high-stakes healthcare decisions will remain in human hands.
His takeaway is simple, but firm: AI is here to help us do our jobs better – to surface the right information, highlight patterns, and remove the noise.