AI is technology that solves problems by simulating human intelligence, allowing data to be handled more rapidly and more accurately. In healthcare it is currently applied to diagnostics, therapeutics, and the management of existing healthcare data. AI is an umbrella term encompassing several subcategories, including machine learning, natural language processing, and rule-based expert systems.
Machine learning uses data and algorithms to learn without being explicitly programmed, allowing it to identify new outcomes and perform complex tasks. Machine learning models are trained on large datasets, including ECG data, using techniques such as parallelisation⁵. This involves combining specialised algorithms, splitting workloads across multiple processors, and utilising graphics processing units (GPUs); together, these techniques improve the efficiency, speed, and accuracy of analysing large datasets and support high sensitivity and specificity. The specialised algorithms are designed to carry out thorough analysis, and the software trains the models. For example, in one study using unsupervised learning, the model was trained on ECG recordings without the cardiologist annotations that would normally be used to label recordings and identify cardiovascular conditions such as arrhythmias. Unsupervised learning clusters its input, grouping similar data points together, so that structure can be learned directly from unlabelled ECG datasets. AI is also trained through deep convolutional neural networks, which scan through data and predict clinical outcomes; applied to ECG datasets, these networks can predict conditions such as left ventricular dysfunction¹,².
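To make the deep-learning approach concrete, the following is a minimal sketch, not the architecture used in the cited studies, of a one-dimensional convolutional network that maps a single-lead ECG segment to a probability of left ventricular dysfunction. The layer sizes, assumed signal length, and synthetic inputs are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the published model): a 1D convolutional
# network that takes an ECG segment and outputs a probability of disease.
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutions slide learned filters along the ECG time axis
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one summary per filter
        )
        self.classifier = nn.Linear(32, 1)  # single logit: probability of dysfunction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x shape: (batch, 1 lead, n_samples), e.g. an assumed 10 s trace at 500 Hz
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.classifier(z))

model = ECGConvNet()
dummy_ecg = torch.randn(4, 1, 5000)   # 4 synthetic ECG segments for illustration
print(model(dummy_ecg).shape)          # torch.Size([4, 1])
```

In practice such a network would be trained on many labelled recordings, with parallelisation across GPUs handling the scale of the dataset.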
Natural language processing is the use of technology to generate language and to enable computers to comprehend linguistics and interpret textual information. Rule-based expert systems exploit data using a set of rules to guide decision-making and employ logical reasoning to execute outcomes.
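The following is a toy sketch of the rule-based idea: hand-written if-then rules, rather than learned parameters, drive the decision. The rule names and thresholds are hypothetical and not drawn from any cited system.

```python
# Illustrative toy "expert system": decisions come from explicit if-then rules.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # test applied to the patient record
    conclusion: str                     # statement asserted when the rule fires

# Hypothetical rules with hypothetical thresholds, for illustration only
RULES: List[Rule] = [
    Rule("tachycardia", lambda p: p["heart_rate"] > 100, "heart rate is elevated"),
    Rule("hypertension", lambda p: p["systolic_bp"] >= 140, "blood pressure is high"),
]

def evaluate(patient: dict) -> List[str]:
    """Apply every rule and return the conclusions that fire."""
    return [r.conclusion for r in RULES if r.condition(patient)]

print(evaluate({"heart_rate": 112, "systolic_bp": 128}))
# ['heart rate is elevated']
```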
Studies using AI in cardiovascular research, diagnosis, and therapeutics are currently under way. One major study involved the diagnosis of asymptomatic left ventricular dysfunction (ALVD). ALVD is a precursor to heart failure in which left ventricular contractility is reduced, lowering blood flow out of the left ventricle and ultimately progressing to chronic heart failure. In the study, AI was applied to the electrocardiogram (ECG) to determine the risk of developing the disease in asymptomatic patients, and it achieved high levels of accuracy, delivering beneficial outcomes in cardiovascular diagnosis.
AI can, however, exhibit algorithmic bias and unintentionally propagate health disparities when it is trained on biased datasets. For example, racial bias was observed in the US when an algorithm underestimated the healthcare needs of Black patients, cutting the number flagged for additional care by more than half⁸. The bias arose because the algorithm prioritised healthcare costs rather than healthcare needs, reflecting existing healthcare inequality. Using AI across different demographics also opens the possibility of data collection bias, whether in sampling or in measurement⁷. Sampling bias may occur in countries with lower socio-economic status, where access to healthcare facilities is already scarce and the patient data used may not represent the entire population, skewing how the algorithms generate outcomes. Measurement bias is another risk, and it has been observed in a study where men were given more lipid-lowering medications than women because cardiovascular disease risk is statistically higher in men⁴. Many forms of bias can arise in AI data, so a global effort amongst clinicians, governments, and healthcare institutions to work collaboratively is needed to harness the benefits that AI models can achieve.
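One simple way to surface potential sampling bias, sketched below under assumed and entirely hypothetical group names and numbers, is to compare the demographic make-up of a training dataset with the population the model is meant to serve.

```python
# Hedged illustration: compare training-set composition with population shares.
# All counts and proportions below are synthetic.
training_counts = {"group_A": 8200, "group_B": 1300, "group_C": 500}
population_share = {"group_A": 0.62, "group_B": 0.25, "group_C": 0.13}

total = sum(training_counts.values())
for group, count in training_counts.items():
    sample_share = count / total
    print(f"{group}: {sample_share:.1%} of training data "
          f"vs {population_share[group]:.1%} of population")
```

Large gaps between the two proportions would flag groups for which the model's outputs may be less reliable.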
Another area where AI has been used in cardiovascular research is atrial fibrillation (AF) diagnosis, where one study reported a sensitivity of 82.3% and a specificity of 83.4%². AF is an irregular and often rapid heart rhythm that can lead to complications such as stroke, heart failure, and other cardiac conditions. It is common, often asymptomatic, and underdiagnosed. The study used machine learning to identify patients with AF from ECG recordings acquired during normal sinus rhythm. Moreover, other studies have shown AI modelling to be highly feasible in cardiovascular magnetic resonance imaging, with prediction of cardiovascular function in acute myocardial infarction patients³. That study performed a clinical trial showing that multivariable modelling improved diagnosis and assessment of outcomes, with p-values ranging from <0.001 to 0.0048. AI therefore enhances diagnostic accuracy, and these studies are amongst many other cardiovascular studies using AI modelling that have obtained significantly higher sensitivity and specificity rates than clinical input alone⁶,⁹. AI clearly shows increased accuracy and can identify cardiovascular abnormalities that may go unnoticed by human judgement, but it has its limitations. When AI is used within healthcare systems, informed consent tends to be compromised by the complexity of platform and data sharing, and there is not always a way to mitigate this. For example, a patient may be entirely unaware of what AI is, yet receive a proposed treatment for preventing embolic stroke that was derived from AI analysis of their data. This also raises ethical considerations around the clinical efficacy of AI algorithms: how likely is it that a situation will be misjudged, and will patients be given unnecessary treatments?
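For readers less familiar with these metrics, the short sketch below shows how sensitivity and specificity are derived from a confusion matrix. The counts are invented purely so that the output reproduces the reported percentages; they are not the actual study counts.

```python
# Sensitivity and specificity from confusion-matrix counts (counts are illustrative).
def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn)   # proportion of true AF cases that are detected
    specificity = tn / (tn + fp)   # proportion of non-AF cases correctly ruled out
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=823, fp=166, tn=834, fn=177)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# sensitivity = 82.3%, specificity = 83.4%
```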
When contemplating the use of AI, legal and regulatory requirements are central to validating models for healthcare use. Regulatory bodies such as the Food and Drug Administration (FDA) have provided additional guidance for AI used within medical devices, which includes a premarket review to consider any modifications that may need to be made and to assess the risk to the public¹¹. This additional guidance is to be used alongside the traditional framework for regulating medical devices. The legal implications of AI include data privacy, intellectual property rights, accountability, transparency, and algorithm disgorgement. The FDA covers these aspects within its "Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan", which minimises and mitigates the legal implications. The FDA addresses AI-generated diagnoses and informed consent through additional scrutiny and tighter regulation of AI in healthcare.
Despite the risks of using AI in cardiovascular research, the overall benefits outweigh them. It remains important, however, to use AI sensibly, harnessing its beneficial outcomes while proceeding with caution. AI should be viewed as a complementary tool, because clinician judgement remains essential. Clinicians should therefore be trained in data analysis and AI methodology, as machines follow a different logical pathway from the human brain¹⁰.
References:
Attia, Z. I., Kapa, S., Lopez-Jimenez, F., McKie, P. M., Ladewig, D. J., Satam, G., Pellikka, P. A., Enriquez-Sarano, M., Noseworthy, P. A., Munger, T. M., Asirvatham, S. J., Scott, C. G., Carter, R. E., & Friedman, P. A. (2019). Screening for cardiac contractile dysfunction using an artificial intelligence–enabled electrocardiogram. Nature Medicine, 25(1), 70-74. https://doi.org/10.1038/s41591-018-0240-2
Attia, Z. I., Noseworthy, P. A., Lopez-Jimenez, F., Asirvatham, S. J., Deshmukh, A. J., Gersh, B. J., Carter, R. E., Yao, X., Rabinstein, A. A., Erickson, B. J., Kapa, S., & Friedman, P. A. (2019). An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. The Lancet, 394(10201), 861-867. https://doi.org/10.1016/S0140-6736(19)31721-0
Backhaus, S. J., Aldehayat, H., Kowallick, J. T., Evertz, R., Lange, T., Kutty, S., Bigalke, B., Gutberlet, M., Hasenfuß, G., Thiele, H., Stiermaier, T., Eitel, I., & Schuster, A. (2022). Artificial intelligence fully automated myocardial strain quantification for risk stratification following acute myocardial infarction. Scientific Reports, 12(1), 12220. https://doi.org/10.1038/s41598-022-16228-w
Li, S., Fonarow, G. C., Mukamal, K. J., Liang, L., Schulte, P. J., Smith, E. E., DeVore, A., Hernandez, A. F., Peterson, E. D., & Bhatt, D. L. (2016). Sex and Race/Ethnicity–Related Disparities in Care and Outcomes After Hospitalization for Coronary Artery Disease Among Older Adults. Circulation: Cardiovascular Quality and Outcomes, 9(2_suppl_1), S36-S44. https://doi.org/10.1161/CIRCOUTCOMES.115.002621
Martínez-Sellés, M., & Marina-Breysse, M. (2023). Current and Future Use of Artificial Intelligence in Electrocardiography. Journal of Cardiovascular Development and Disease, 10(4).
Mohsen, F., Al-Saadi, B., Abdi, N., Khan, S., & Shah, Z. (2023). Artificial Intelligence-Based Methods for Precision Cardiovascular Medicine. Journal of Personalized Medicine, 13(8).
Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X. C., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278. https://doi.org/10.1371/journal.pdig.0000278
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Singh, M., Kumar, A., Khanna, N. N., Laird, J. R., Nicolaides, A., Faa, G., Johri, A. M., Mantella, L. E., Fernandes, J. F. E., Teji, J. S., Singh, N., Fouda, M. M., Singh, R., Sharma, A., Kitas, G., Rathore, V., Singh, I. M., Tadepalli, K., Al-Maini, M., . . . Suri, J. S. (2024). Artificial intelligence for cardiovascular disease risk assessment in personalised framework: a scoping review. eClinicalMedicine, 73. https://doi.org/10.1016/j.eclinm.2024.102660
Tikhomirov, L., Semmler, C., McCradden, M., Searston, R., Ghassemi, M., & Oakden-Rayner, L. (2024). Medical artificial intelligence for clinicians: the lost cognitive perspective. The Lancet Digital Health, 6(8), e589-e594. https://doi.org/10.1016/S2589-7500(24)00095-5
US Food and Drug Administration. (2024). Artificial Intelligence and Machine Learning in Software as a Medical Device. Retrieved 13 October 2024 from https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device#regulation