Hashtag: #ehrs
#Medication-Based Severity Stratification in Psoriasis Using #EHRs Date Submitted: Mar 14, 2026. Open Peer Review Period: Mar 24, 2026 - May 19, 2026.

Reminder>> #Medication-Based Severity Stratification in Psoriasis Using #EHRs (preprint) #openscience #PeerReviewMe #PlanP

A Sentence Classification–Based Medical Status Extraction Pipeline for EHRs: Institutional Case Study

Background: Clinical data warehouses store large volumes of unstructured text containing valuable information about patients’ medical status. Traditional extraction systems based on named entity recognition (NER) identify medical terms but often fail to capture the contextual cues needed for accurate interpretation. Existing approaches to context-aware extraction differ in their reliance on expert annotation, computational power, and lexical resources, leading to uneven feasibility across institutions. Combined with heterogeneity in documentation practices and data-sharing restrictions, these limitations hinder the scalability and reuse of trained models. There is thus a need for practical frameworks that can be deployed and adapted locally within medical institutions.

Objective: This study aimed to introduce the Medical Status Extraction Pipeline (MSEP), a methodological framework that extracts patients’ medical status from clinical narratives through sentence classification and supports the local deployment of hybrid extractors, illustrated through an institutional case study.

Methods: MSEP extracts medical status by classifying sentences into predefined categories (presence, absence, or unknown) for each targeted condition. The pipeline combines modules for data selection, expert annotation, and model development, with parameters customizable to different settings. It was applied within our institutional environment to 6 conditions: smoking, hypertension, diabetes, heart failure, chronic obstructive pulmonary disease, and family history of cancer, using 12,119 manually annotated sentences from the eHOP Clinical Data Warehouse (Rennes University Hospital). Three types of extractors were compared: a fine-tuned CamemBERT model, a large language model (LLM) prompt-based extractor, and a rule-based baseline, evaluated through stratified 3-fold cross-validation measuring precision, recall, specificity, macro F1-score, and balanced accuracy, as well as manual annotation time and model inference speed.

Results: Among the tested approaches, the CamemBERT-based extractor achieved the best overall performance, with macro F1-scores above 0.94 for 5 of the 6 medical conditions. The study also highlights that when a medical status is very sparsely represented in the training data, rule-based extractors can outperform learned models (average macro F1-score 0.94 vs 0.73 for family history of cancer), showing the pragmatic value of choosing the extraction method according to data availability. Manual annotation time per sentence ranged from 1.2 to 2.9 seconds within the pipeline (2.23 to 4.25 seconds for informative sentences), compared with 7.8 to 16.5 seconds for NER-based systems. In our institutional experiments, the minimum time to complete all pipeline modules, from dataset construction to final extractor refinement, was 8 hours.

Conclusions: In our institutional case study, MSEP enabled rapid construction of datasets and extractors across multiple clinical conditions while reducing the effort required for local development. Its modular and configurable design allowed the adoption of hybrid extraction approaches and adaptation to different resource settings. These features highlight MSEP’s value as a research tool and upstream component that facilitates local deployment of clinical information extraction workflows.
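As a rough illustration of the pipeline's core idea, classifying each sentence into presence, absence, or unknown for a target condition and scoring with macro F1, here is a minimal self-contained sketch. The negation cues, example sentences, and labels are invented for illustration and are not MSEP's actual rules or data:

```python
LABELS = ("present", "absent", "unknown")

def rule_based_status(sentence: str, term: str) -> str:
    """Toy rule-based extractor in the spirit of MSEP's baseline:
    a negation cue before the target term -> absent; term alone -> present;
    term missing -> unknown. Cues are illustrative only."""
    s = sentence.lower()
    if term not in s:
        return "unknown"
    for cue in ("no ", "denies ", "without ", "non-"):
        if cue + term in s:
            return "absent"
    return "present"

def macro_f1(gold, pred):
    """Macro F1 over the three status labels (unweighted mean of per-class F1)."""
    scores = []
    for lab in LABELS:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Invented example sentences with gold labels for the "smoking" status.
sentences = [
    ("Patient is a current smoker.", "present"),
    ("Denies smoking.", "absent"),
    ("Blood pressure stable today.", "unknown"),
    ("No smoking history reported.", "absent"),
]
pred = [rule_based_status(s, "smok") for s, _ in sentences]
gold = [g for _, g in sentences]
print(round(macro_f1(gold, pred), 2))  # 1.0
```

On a sparsely represented status, a handful of such rules can beat a learned model, which is the trade-off the Results section quantifies.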

New JMIR MedInform: A Sentence Classification–Based #medical Status Extraction Pipeline for #ehrs: Institutional Case Study

How words discredit: A taxonomy of stigmatizing language in the electronic health record Language in electronic health records (EHRs) can transmit stigma, discrediting patients in ways that undermine the clinician-patient relationship and …

Stigmatizing language in #EHRs isn't limited to discrete terms but is embedded in linguistic practices that shape how #patients are represented and understood, particularly those describing how patients fail to align with clinical expectations www.sciencedirect.com/science/arti...

#MedSky #PatientExperience

Awakari App

AHA urges ONC to slow health IT certification overhaul The American Hospital Association is urging federal health IT officials to scale back and slow portions of a proposed interoperability overhau...

#EHRs #Interoperability #HealthIT
Awakari App

Sutter to embed AI search engine into Epic EHR Sacramento, Calif.-based Sutter Health is partnering with OpenEvidence, an AI-powered medical search engine, to integrate real-time medical search cap...

#EHRs #Interoperability #HealthIT

Is there a health tech / digital health community on here?

If so, and if anyone has been to the annual conference or HQ of EHR provider Epic Systems in Wisconsin, could you drop me a line for an off-the-record chat, please?

Thanks

#journorequest #EHRs #Epic #healthtech #digitalhealth

Linking EHRs for Multiple Sclerosis Research: Comparative Study of Deterministic, Probabilistic, and Machine Learning Linkage Methods

Background: Data linkage in pharmacoepidemiological research is commonly employed to ascertain exposure and outcomes, or to obtain more information about confounding variables. However, to protect patient confidentiality, unique patient identifiers are usually not provided, which makes data linkage between sources challenging. The Saudi Real-Evidence Research Network (RERN) aggregates EHRs from various hospitals, which may require a robust linkage technique.

Objective: To evaluate and compare the performance of deterministic, probabilistic, and machine learning approaches for linking de-identified multiple sclerosis (MS) patient data from the RERN and Ministry of National Guard Health Affairs (MNGHA) EHR systems.

Methods: We applied a simulation-based validation framework before linking real-world data sources. Deterministic linkage was based on predefined rules, while probabilistic linkage was based on similarity-score matching. We applied both similarity-score and classification approaches in machine learning models, including neural networks (NN), logistic regression (LR), and random forest (RF). The performance of each approach was assessed using a confusion matrix, focusing on sensitivity, positive predictive value (PPV), F1-score, and computational efficiency.

Results: The study included linked data of 2,247 MS patients (spanning 2016 to 2023). The deterministic approach achieved an average F1-score of 97.2% in the simulation and demonstrated varying match rates in real-world linkage: 1,046 out of 2,247 (46.6%) to 1,946 out of 2,247 (86.6%). This linkage was computationally efficient with a run time of
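The contrast between deterministic and similarity-score (probabilistic) linkage described in the Methods can be sketched as follows; the record fields, weights, and threshold are hypothetical, not those used with the RERN/MNGHA data:

```python
from difflib import SequenceMatcher

# Toy records from two de-identified sources; field names are illustrative.
a = {"name": "Sara Al-Harbi", "birth_year": 1990, "sex": "F"}
b = {"name": "Sarah Alharbi", "birth_year": 1990, "sex": "F"}

def deterministic_match(r1, r2) -> bool:
    """Link only on exact agreement of all predefined key fields."""
    return all(r1[k] == r2[k] for k in ("name", "birth_year", "sex"))

def similarity_score(r1, r2) -> float:
    """Probabilistic-style score: weighted mix of fuzzy name similarity
    and exact agreement on birth year and sex (weights are made up)."""
    name_sim = SequenceMatcher(None, r1["name"].lower(), r2["name"].lower()).ratio()
    year_sim = 1.0 if r1["birth_year"] == r2["birth_year"] else 0.0
    sex_sim = 1.0 if r1["sex"] == r2["sex"] else 0.0
    return 0.6 * name_sim + 0.3 * year_sim + 0.1 * sex_sim

THRESHOLD = 0.85
print(deterministic_match(a, b))            # False: exact rule misses the name variant
print(similarity_score(a, b) >= THRESHOLD)  # True: fuzzy score still links the pair
```

This is exactly the trade-off the Results section reflects: deterministic rules are fast but miss variant spellings, which similarity-score methods can recover at extra computational cost.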

New JMIR MedInform: Linking #ehrs for Multiple Sclerosis #research: Comparative Study of Deterministic, Probabilistic, and Machine Learning Linkage Methods

Awakari App

Best in KLAS 2026: Who’s winning in ambient AI, EHRs, revenue cycle and more KLAS Research released its annual “Best in KLAS” report Feb. 4, ranking healthcare technology vendors and service ...

#EHRs #Interoperability #HealthIT
Developing a Multimodal Screening Algorithm for Mild Cognitive Impairment and Early Dementia in Home Health Care: Protocol for a Cross-Sectional Case-Control Study Using Speech Analysis, Large Language Models, and Electronic Health Records (EHRs)

Background: Mild cognitive impairment and early dementia (MCI-ED) are frequently unrecognized in routine care, particularly in home health care (HHC), where clinical decisions are made under time constraints and cognitive status may be incompletely documented. Federally mandated HHC assessments, such as the Outcome and Assessment Information Set (OASIS), capture health and functional status but may miss subtle early cognitive changes. Speech, language, and interactional patterns during routine patient-nurse communication, together with information embedded in unstructured clinical notes, may provide complementary signals for earlier identification.

Objective: This protocol describes the development and evaluation of a multimodal screening approach for identifying MCI-ED in HHC by integrating (1) speech and interaction features from routine patient-nurse encounters (verbal communication), (2) large language model–based extraction of MCI-ED–related information from HHC notes and encounter transcripts, and (3) structured variables from OASIS.

Methods: This ongoing cross-sectional case-control study is being conducted in collaboration with VNS Health (formerly Visiting Nurse Service of New York). Eligible participants are adults aged ≥60 years receiving HHC services. Case/control assignment uses a 2-stage process: electronic health record (EHR) prescreening followed by clinician-reviewed cognitive assessment (Montreal Cognitive Assessment and Clinical Dementia Rating) for consented participants without an existing mild cognitive impairment diagnosis. For Aim 1, each participant contributes 3 audio-recorded routine patient-nurse encounters linked to EHR data, including OASIS and free-text clinical notes. Aim 1 extracts acoustic, linguistic, emotional, and interactional features from patient-nurse verbal communication. Aim 2 uses a schema-guided large language model pipeline to extract and normalize MCI-ED–related symptoms, lifestyle risk factors, and communication deficits from HHC notes and encounter transcripts, supported by a human-annotated gold-standard dataset. Aim 3 integrates speech, extracted text variables, and OASIS predictors using supervised machine learning with stratified nested cross-validation; evaluation will include discrimination, calibration, and subgroup performance checks across race, sex, and age.

Results: Between February 2024 and July 2025, a total of 114 HHC patients completed study-administered cognitive assessments and were classified as 55 MCI-ED cases and 59 cognitively normal controls. Audio-recorded patient-nurse encounters had a median duration of 19 (IQR 12-23) minutes and a median of 56 (IQR 31-80) utterances per encounter; nurses contributed more words than patients (median 842, IQR 461-1218 vs median 589, IQR 303-960). In exploratory feasibility analyses, multimodal models integrating speech, interactional features, and structured EHR/OASIS variables outperformed single-source models.

Conclusions: This protocol describes a reproducible multimodal framework for MCI-ED screening in HHC using routinely generated data streams. Initial implementation results support the feasibility of data collection and end-to-end processing and suggest potential value in integrating interactional speech features with clinical text and OASIS variables. Final model evaluation, subgroup analyses, and validation will follow the prespecified analytic procedures on the finalized study dataset.
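A minimal sketch of the kind of interactional features Aim 1 describes, such as utterance counts and word counts per speaker in a patient-nurse transcript. The transcript and feature names below are invented for illustration and are not the study's actual feature set:

```python
# Hypothetical (speaker, utterance) pairs; real encounters are audio-derived.
transcript = [
    ("nurse", "How have you been sleeping this week?"),
    ("patient", "Not too bad."),
    ("nurse", "Any trouble remembering your medications?"),
    ("patient", "Sometimes I forget the evening pill."),
]

def interactional_features(turns):
    """Toy interactional features: utterance and word counts per speaker,
    plus the nurse-to-patient word ratio noted in the Results."""
    feats = {}
    for speaker in ("nurse", "patient"):
        utts = [u for s, u in turns if s == speaker]
        feats[f"{speaker}_utterances"] = len(utts)
        feats[f"{speaker}_words"] = sum(len(u.split()) for u in utts)
    feats["word_ratio_nurse_to_patient"] = (
        feats["nurse_words"] / feats["patient_words"]
        if feats["patient_words"] else 0.0
    )
    return feats

print(interactional_features(transcript))
```

Features like these would then be concatenated with the extracted text variables and OASIS predictors before supervised model training.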

JMIR Res Protocols: Developing a Multimodal Screening Algorithm for Mild Cognitive Impairment and Early Dementia in Home Health Care: #Protocol for a Cross-Sectional Case-Control #Study Using Speech Analysis, Large Language Models, and Electronic Health Record #ehrs

Awakari App

Healthcare IT Managed Services Revisited: A New Look at Value Propositions for 2026 Too many healthcare organizations have support teams focused on break-fix, ongoing maintenance and other keeping ...

#EHRs #Interoperability #HealthIT
Natural Language Processing for #EHRs in Scandinavian Languages: Norwegian, Swedish, and Danish Date Submitted: Jan 21, 2026. Open Peer Review Period: Jan 22, 2026 - Mar 19, 2026.

Reminder>> Natural Language Processing for #EHRs in Scandinavian Languages: Norwegian, Swedish, and Danish (preprint) #openscience #PeerReviewMe #PlanP

Machine Learning Prediction of Pharmacogenetic Testing Uptake Among Opioid-Prescribed Patients Using EHRs: Retrospective Cohort Study

Background: Opioids are a widely prescribed class of medication for pain management. However, they have variable efficacy and adverse effects among patients, due to a complex interplay between biological and clinical factors. Pharmacogenetic (PGx) testing can be used to match patients’ genetic profiles to individualize opioid therapy, improving pain relief and reducing the risk of adverse effects. Despite its potential, PGx uptake (utilization of PGx testing) remains low due to a range of barriers at the patient, health care provider, infrastructure, and financial levels. Since testing typically involves a shared decision between the provider and patient, predicting the likelihood of a patient undergoing PGx testing and understanding the factors influencing that decision can help optimize resource use and improve outcomes in pain management.

Objective: To develop machine learning (ML) models that identify patients’ likelihood of PGx uptake based on their demographics, clinical variables, medication use, and social determinants of health (SDoH).

Methods: We used EHR data from a single-center health care system to identify patients prescribed opioids. We extracted patients’ demographics, clinical variables, medication use, and SDoH, and developed and validated ML models, including neural networks (NN), logistic regression (LR), random forests (RF), gradient boosting (XGB), naïve Bayes (NB), and support vector machines (SVM), for PGx uptake prediction based on procedure codes. We performed 5-fold cross-validation (CV) and created an ensemble probability-based classifier using the best-performing ML models for PGx uptake prediction. Various performance metrics, uptake stratification analysis, and feature importance analysis were employed to evaluate the performance of the models.

Results: The ensemble model using XGB and SVM-RBF classifiers had the highest C-statistic at 79.61%, followed by XGB (78.94%) and NN (78.05%). While XGB was the best-performing single model, the ensemble model achieved high accuracy (67.38%), recall (76.50%), specificity (67.25%), and negative predictive value (99.49%). The uptake stratification analysis using the ensemble model indicated that it can effectively distinguish across uptake probability deciles, with patients in the higher strata more likely to undergo PGx testing in the real world (6.59% in the highest decile compared to 0.12% in the lowest). Furthermore, SHAP value analysis using the XGB model indicated age, hypertension, and household income as the most influential factors for PGx uptake prediction.

Conclusions: The proposed ensemble model demonstrated high performance in PGx uptake prediction among patients using opioids for pain. This model can be used as a decision support tool, assisting clinicians in identifying patients’ likelihood of PGx uptake and guiding appropriate decision-making.
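A probability-based ensemble like the XGB + SVM-RBF combiner can be sketched as averaging the base models' positive-class probabilities and thresholding the result; the probabilities and threshold below are hypothetical, not values from the study:

```python
def ensemble_probability(probs_per_model):
    """Average the positive-class probabilities emitted by several base
    classifiers, a simple probability-based ensembling scheme."""
    return sum(probs_per_model) / len(probs_per_model)

def predict_uptake(probs_per_model, threshold=0.5):
    """1 = predicted to undergo PGx testing, 0 = not."""
    return int(ensemble_probability(probs_per_model) >= threshold)

# Hypothetical per-patient positive-class probabilities from two base models.
xgb_prob, svm_prob = 0.72, 0.41
print(round(ensemble_probability([xgb_prob, svm_prob]), 3))  # 0.565
print(predict_uptake([xgb_prob, svm_prob]))                  # 1
```

Stratifying patients by this averaged probability into deciles is what enables the uptake stratification analysis reported above.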

New JMIR MedInform: Machine Learning Prediction of Pharmacogenetic Testing Uptake Among Opioid-Prescribed #patients Using #ehrs: Retrospective Cohort Study

Awakari App

HCA Healthcare UK goes live with Meditech EHR HCA Healthcare UK has implemented Meditech Expanse, a cloud-based EHR system, across its 11 acute care facilities and dozens of outpatient locations. T...

#EHRs #Interoperability #HealthIT
Awakari App

TEFCA vs. CMS-aligned networks for interoperability? 5 notes CMS launched an interoperability pledge program in July to spur information-sharing efforts. But how does it differ from TEFCA, aka the ...

#EHRs #Interoperability #HealthIT
Unsupervised Calibration for Phenotyping and Association Studies: Learning with Noisy Labels in #EHRs Date Submitted: Nov 25, 2025. Open Peer Review Period: Dec 10, 2025 - Feb 4, 2026.

Reminder>> Unsupervised Calibration for Phenotyping and Association Studies: Learning with Noisy Labels in #EHRs (preprint) #openscience #PeerReviewMe #PlanP

Awakari App

The financial impact of healthcare ransomware attacks: 4 notes Healthcare continues to pay a significant financial toll for ransomware attacks, the U.S. Treasury Department found. Here are four thi...

#EHRs #Interoperability #HealthIT
Awakari App

AHA warns of ‘10 out of 10’ cyber vulnerability The American Hospital Association is advising hospitals and health systems to fix a cybersecurity flaw that received the highest vulnerability...

#EHRs #Interoperability #HealthIT
Awakari App

St. Luke’s, Microsoft partner on cybersecurity Bethlehem, Pa.-based St. Luke’s University Health Network has partnered with Microsoft to boost cybersecurity across its 13-hospital and 67-practi...

#EHRs #Interoperability #HealthIT
Awakari App

California hospital seeks financial partner, citing cyberattack costs Watsonville (Calif.) Community Hospital is looking for a health system partner, with a recent cyberattack contributing to its f...

#EHRs #Interoperability #HealthIT
Awakari App

How NYU Langone uses its EHR to personalize care at the bedside New York City-based NYU Langone Health recently launched About Me, an initiative that allows patients to share short personal details...

#EHRs #Interoperability #HealthIT
Awakari App

Hospitals struggle with vendor cyber readiness: 7 notes Hospitals have trouble disconnecting from breached IT vendors and AI platforms, compromising their cybersecurity, Black Book Market Research ...

#EHRs #Interoperability #HealthIT
A Bilingual On-Premises AI Agent for Clinical Drafting: Implementation Report of Seamless EHR Integration in the Y-KNOT Project

Background: Large language models (LLMs) have shown promise in reducing clinical documentation burden, yet their real-world implementation remains rare. In South Korea especially, hospitals face unique challenges such as strict data sovereignty requirements and operating in environments where English is not the primary language for documentation. We therefore initiated the Your-Knowledgeable Navigator of Treatment (Y-KNOT) project, aimed at developing an on-premises, bilingual, LLM-based artificial intelligence (AI) agent system integrated with the electronic health record (EHR) for automated clinical drafting.

Objective: We present the Y-KNOT project and provide insights into implementing AI-assisted clinical drafting tools within the constraints of a health care system.

Methods: The project involved multiple stakeholders and encompassed three simultaneous processes: LLM development, clinical co-development, and EHR integration. We developed a foundation LLM by pretraining Llama3-8B with Korean and English medical corpora. During the clinical co-development phase, the LLM was instruction-tuned for specific documentation tasks through iterative cycles that aligned physicians’ clinical requirements, hospital data availability, documentation standards, and technical feasibility. The EHR integration phase focused on seamlessly incorporating the AI agent into clinical workflows, involving document standardization, trigger point definition, and user interaction optimization.

Results: The resulting system processes emergency department discharge summaries and preanesthetic assessments with high evaluation scores across multiple clinical metrics while maintaining existing clinical workflows. The drafting process is automatically triggered, and medical records are automatically fed into the LLM as input. The agent runs on-premises, keeping the entire architecture inside the hospital.

Conclusions: The Y-KNOT project demonstrates the first seamless integration of an AI agent into an EHR system for clinical drafting. In collaboration with various stakeholders, we derived ways to address the key challenges of data security, bilingual requirements, and workflow integration. Our experience highlights a practical and scalable approach to utilizing LLM-based AI agents in other health care institutions, paving the way for broader adoption of LLM-based solutions.
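The trigger-and-draft flow can be sketched as template selection plus field filling before the on-premises LLM is called. The record schema, field names, and templates below are assumptions for illustration only, since Y-KNOT's actual interfaces are not public:

```python
# Hypothetical structured fields fed to the agent on a documentation trigger
# (e.g., an ED discharge event); not Y-KNOT's real EHR schema.
record = {
    "chief_complaint": "chest pain",
    "disposition": "discharged",
    "language": "ko",
}

# Bilingual prompt templates; the Korean one reads "Draft a discharge
# summary from the following encounter record: ...".
PROMPT_TEMPLATES = {
    "ko": "다음 진료 기록으로 퇴원 요약 초안을 작성하세요: {chief_complaint}, {disposition}",
    "en": "Draft a discharge summary from this encounter: {chief_complaint}, {disposition}",
}

def build_prompt(rec):
    """Select the template for the record's language (falling back to
    English) and fill it with structured fields; the result would then
    be sent to the locally hosted LLM."""
    template = PROMPT_TEMPLATES.get(rec["language"], PROMPT_TEMPLATES["en"])
    return template.format(**{k: rec[k] for k in ("chief_complaint", "disposition")})

prompt = build_prompt(record)
print(prompt)
```

Keeping this entire path, including the model call, inside the hospital network is what satisfies the data sovereignty constraint the abstract describes.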

New JMIR MedInform: A Bilingual On-Premises AI Agent for Clinical Drafting: Implementation Report of Seamless #ehrs Integration in the Y-KNOT Project


Catch up on the Evidence Insights, "Turning data into evidence: fit-for-purpose RWD in practice."

This feature explores how #claims, #EHRs, #registries, #genomics, and patient-generated data can be combined to deliver reliable #RealWorldEvidence.

Explore here: becarispublishing.com/spotlights/t...

Awakari App

Hacker accesses employee emails at Chicago safety-net hospital An unauthorized party gained access to a “limited number” of employee email accounts at Chicago-based Saint Anthony Hospital, pote...

#EHRs #Interoperability #HealthIT
Named Entity Recognition for Chinese Cancer EHRs—Development and Evaluation of a Domain-Specific BERT Model: Quantitative Study

Background: The unstructured data of Chinese cancer electronic medical records contains valuable medical expertise, and accurate medical entity recognition is crucial for building a medical decision support system. Named entity recognition (NER) in cancer electronic medical records (EMRs) typically employs general models designed for English medical records; there is a lack of specialized handling for cancer-specific records and limited application to Chinese medical records.

Objective: This study proposes a specific NER model to enhance the recognition of medical entities in Chinese cancer electronic medical records.

Methods: De-identified inpatient electronic medical records related to breast cancer were collected from a leading hospital in Beijing. Building upon the MC-BERT foundation, the study further incorporated a Chinese cancer corpus for pretraining, resulting in the ChCancerBERT pretrained model. In conjunction with dilated gated convolutional neural networks, bidirectional long short-term memory, a multi-head attention mechanism, and a conditional random field, this model forms a multi-model, multi-level integrated named entity recognition approach.

Results: This approach effectively extracts medical entities related to symptoms, signs, tests, treatments, and time in Chinese breast cancer electronic medical records. The entity recognition performance of the proposed model surpasses that of the baseline model and the other models compared in the experiment: the F1 score reached 86.93%, precision 87.24%, and recall 86.61%. The model also performs strongly on the CCKS2019 dataset, attaining a precision of 87.26%, a recall of 87.27%, and an F1 score of 87.26%, surpassing existing models.

Conclusions: The experiments demonstrate that the proposed approach performs well in named entity recognition within breast cancer electronic medical records. This advancement will further contribute to clinical decision support for cancer treatment and research. Additionally, the study shows that incorporating domain-specific corpora in clinical named entity recognition tasks can further enhance the performance of BERT models in specialized domains.
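The final step of such a pipeline, turning per-character tags emitted by the CRF layer into entity spans, can be sketched as follows, assuming a standard BIO tagging scheme (the example text and tag set are illustrative, not the study's annotation schema):

```python
def decode_bio(tokens, tags):
    """Collapse BIO tags into (entity_text, entity_type) spans: B- starts
    a span, matching I- extends it, anything else closes it."""
    entities, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(("".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == current_type:
            current.append(token)
        else:
            if current:
                entities.append(("".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append(("".join(current), current_type))
    return entities

# Character-level tokens, as is typical for Chinese clinical NER;
# the sentence means "breast mass, to be investigated".
tokens = list("乳腺肿块待查")
tags = ["B-SYMPTOM", "I-SYMPTOM", "I-SYMPTOM", "I-SYMPTOM", "O", "O"]
print(decode_bio(tokens, tags))  # [('乳腺肿块', 'SYMPTOM')]
```

Span-level precision, recall, and F1 such as the figures reported above are then computed by comparing these decoded spans against the gold annotations.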

New JMIR MedInform: Named Entity Recognition for Chinese Cancer #ehrs—Development and Evaluation of a Domain-Specific BERT Model: Quantitative Study

Deciphering biomarker signatures for the early diagnosis and prediction of sepsis among adult #Patients in #EHRs: a scoping review #Protocol Date Submitted: Nov 11, 2025. Open Peer Review Period: Nov 11, 2025 - Jan 6, 2026.

Reminder>> Deciphering biomarker signatures for the early diagnosis and prediction of sepsis among adult #Patients in #EHRs: a scoping review #Protocol (preprint) #openscience #PeerReviewMe #PlanP
