How artificial intelligence is changing the future of the aesthetics arena and shaping a new way to improve outcomes and efficiency.
‘The aim of medicine is to prevent disease and prolong life; the ideal of medicine is to eliminate the need of a physician,’ said Dr William J Mayo, one of the founders of the famed Mayo Clinic. Fast-forward some 93 years and it appears we are indeed on our way to achieving this ideal, albeit with machines and Artificial Intelligence (AI).
AI refers to the simulation of human intelligence in machines that are programmed to think and act like humans. Many expect it to be the most impactful technology we have ever experienced.
And, while Elon Musk has said “inviting AI into the world is like summoning the demon”, many scientists and experts in the field believe AI has the potential to greatly improve the human condition.
While AI has yet to reach its full potential, it has become an important area of research in healthcare due to the rapid progression of technology, which has enabled the automation of many processes historically reliant on human input. Artificial intelligence, machine learning and mixed reality (virtual and augmented) will play a leading role in treatment planning, decision-making and best practices, as well as working hand-in-hand with doctors in the form of robotic devices.
The integration of AI technology will no doubt foster the next great leaps in medicine and surgery. In particular, big data – vast datasets analysed by artificial neural networks for large-scale pattern recognition and rapid quantification – is poised to shape healthcare and the advent of true precision medicine.
AI and aesthetic surgery
In this age of rapid technological innovation and transformation, the fields of plastic surgery and cosmetic medicine are changing as much as any other. AI has become particularly important in plastic surgery in a variety of settings. Big data, machine learning, deep learning, natural language processing and facial recognition are examples of AI-based technology that plastic surgeons are beginning to utilise to advance their surgical practice.
Plastic surgery and aesthetic medicine are areas that have the capacity to use AI to its full potential. The everyday cognitive tasks of pre-operative assessment, case planning and post-operative decision-making could be streamlined by thinking machines, allowing for increased productivity and improved patient care.
Here we look at some of the technologies shaping the aesthetic practice today and into the future.
Machine learning applications
Machine learning uses algorithms to parse data, learn from that data and make informed decisions based on what it has learned without being explicitly programmed. Deep learning, a subfield of machine learning, goes one step further and structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own.
Machine-learning algorithms are responsible for the vast majority of today’s AI advancements and applications. They can analyse big data and learn to recognise patterns, predict outcomes and increase in accuracy with each iteration.
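The idea of a model that improves with each pass over its data, without being explicitly programmed, can be shown with a minimal sketch. The example below is purely illustrative (synthetic data, not drawn from any study cited in this article): a simple perceptron nudges its weights after every mistake, and its accuracy rises as it iterates.

```python
# Illustrative sketch only: a perceptron "learns from data" by adjusting
# its weights after each error, improving with every pass (iteration).
import random

random.seed(0)

# Synthetic dataset: the label is 1 when x1 + x2 > 1, else 0.
data = []
for _ in range(200):
    x1, x2 = random.random(), random.random()
    data.append(((x1, x2), 1 if x1 + x2 > 1 else 0))

w1 = w2 = b = 0.0  # weights start at zero; the model knows nothing

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def accuracy():
    return sum(predict(*x) == y for x, y in data) / len(data)

for epoch in range(10):                  # each pass over the data
    for (x1, x2), y in data:
        err = y - predict(x1, x2)        # -1, 0 or +1
        w1 += 0.1 * err * x1             # nudge weights toward the answer
        w2 += 0.1 * err * x2
        b += 0.1 * err
    print(epoch, round(accuracy(), 2))   # accuracy climbs across epochs
```

Real clinical models are vastly larger, but the principle – error-driven updates repeated over data – is the same.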
A 2020 systematic literature review1 on AI in plastic surgery stated that machine learning can aid plastic surgeons in decision-making through predicting prognoses and diagnoses.
The authors noted that in the early 2000s, Yeong et al developed a model that used data obtained from a portable reflective spectrophotometer to determine burn depth and healing time, with an average accuracy of 86%. The artificial neural network (ANN) was able to differentiate between burns that would heal before or after 14 days, with an accuracy of 96% and 75%, respectively.
More recently, an application was developed to monitor postoperative free flap viability based on skin colour assessed via photographs taken with a Samsung Galaxy S2. Photographs of subjects’ hands under different degrees of venous and arterial occlusion were used to train the application, which was able to accurately assess the vascular status of new subjects with a sensitivity and specificity of 94% and 98%, respectively.
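Sensitivity and specificity figures like these are computed from simple counts of correct and incorrect calls. The sketch below uses hypothetical labels (not the study’s actual data) to show the general calculation: sensitivity is the fraction of truly compromised flaps the model flags, specificity the fraction of healthy flaps it correctly clears.

```python
# General-purpose sensitivity/specificity calculation (illustrative
# labels only; 1 = vascular compromise, 0 = healthy flap).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and model predictions for ten flaps.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
print(round(sens, 2), round(spec, 2))  # → 0.75 0.83
```

A clinically useful monitor needs both numbers high: sensitivity so compromised flaps are not missed, specificity so healthy flaps do not trigger needless take-backs.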
The literature review also highlighted a 2019 study2 demonstrating that a supervised machine learning model can aid surgical planning via automated diagnosis and simulation: the authors developed a model that successfully diagnosed jaw surgery patients from a 3D scan.
This was the first fully automated large-scale clinical 3DMM (3D morphable models) involving supervised learning for diagnostics, risk stratification and treatment simulation. Using databases comprising 10,000 3D face scans of healthy volunteers and patients admitted for orthognathic surgery, the authors trained and validated a 3DMM, and demonstrated its potential for clinical decision making, including
fully-automated diagnosis and surgery simulation. The authors propose their method can help surgeons in surgical planning as well as patient education, potentially transforming patient-specific clinical decision-making in orthognathic surgery and other fields of plastic and reconstructive surgery.
In another 2020 published review3 of the relevant literature, it was found that current machine learning models using convolutional neural networks can evaluate breast mammography and differentiate benign and malignant tumours as accurately as specialist doctors, and motion sensor surgical instruments can collate real-time data to advise intraoperative technical adjustments.
AI has long held great promise in detecting skin cancer. A number of studies have shown how AI can help detect and classify skin cancers, with even greater accuracy than specialist dermatologists.
Mixed reality and the operating room
Mixed reality combines virtual reality (a virtual world with virtual objects) and augmented reality (the real world with digital information, acting as a virtual layer on top of the world in front of you).
In mixed reality, digital information is represented by holograms (objects made of light and sound) that appear in the space around you. Through AI, these holograms respond to commands and interact with real-world surfaces in real time for a more natural and intuitive experience.
Mixed reality and AR-guided technologies now offer surgeons new preoperative planning tools, and a means to collaborate and share expertise across the globe.
In February 2021, surgeons from across the globe undertook 12 mixed reality-supported holographic surgeries as part of a 24-hour Microsoft-hosted online event. Alongside the surgeries were 15 roundtables and live interviews on the topic. More than 15,000 viewers from 130 countries took part in this unique experience.
These surgeries with real-life footage during the 24-hour event demonstrated how mixed reality technologies have tremendous potential to greatly enhance how surgeons operate, enrich the learning experiences of doctors and ultimately enhance patient outcomes.
Through a custom app, surgeons were able to interact with anatomical images of their patients in holograms projected in real time in the operating room as well as have critical access to interactive tutorials during the surgeries. The surgeons showed how Microsoft’s HoloLens mixed-reality headsets could enable them to access data, improve accuracy and even facilitate collaboration between doctors worldwide by layering information over the patient as a guide for procedures.
HoloLens is operated with hand gestures and voice commands, enabling surgeons to view three-dimensional holographic images of a patient’s anatomy created from X-rays or other scans. Surgeons can move those virtual images around
to see them from different angles. They can also use the HoloLens to access patient data during surgery, call up videos or documents to help solve problems and contact other specialists for advice.
As reported by Microsoft, Dr John Sledge, an orthopedic surgeon in Lafayette, Louisiana, performed one of the procedures that was part of the HoloLens project, a 10-hour spinal fusion surgery. As surgeons from around the world looked on, Dr Sledge pulled up X-rays and scans of the patient’s lower back to locate
the pieces of hardware that needed to be removed and determine how to best position his instruments to access them. He had also loaded images taken shortly after the patient’s accident so he could explain the preop history to the other surgeons.
Without the HoloLens, Dr Sledge says, he would have been limited to a few images from the patient’s scan on a computer screen.
‘It’s an enormously limited data set that’s available to me,’ he says. ‘With the HoloLens, I can pull up the images I want and make them bigger as needed. I’ve got all the images I need literally right in front of me, in whatever size and clarity that I need them to be.’
HoloLens is also useful for planning surgeries and training, Dr Sledge says. If he’s planning a shoulder operation, for example, he can create a holographic representation of the patient’s shoulder to determine where an implant should be placed or whether the bone needs to be reconstructed.
Instead of printing out a 3D replica of a bone and having just one shot at practising an operation on it, Dr Sledge says, he can test his approach on a hologram as many times as needed. For training, he can generate 3D interactive models of any surgical procedure or even create simulated complications on a hologram — say, a fracture or unexpected bleeding — for doctors in training to solve.
‘Medicine, particularly surgery, is still an apprenticeship. You watch a person operate 100 times before you’re allowed to [operate],’ says Dr Sledge. ‘But now we can have residents run through 100 operations on the HoloLens, complete with rare complications and their solutions. We can do worst-case scenario training. With the HoloLens, we can make a problem occur and the doctor in training has to solve it.’
‘We can standardise the surgical training so that all graduates across the world will have seen and learned how to solve all of the cases in the HoloLens library in their field of training. There’s a huge difference between you watching someone else solve a problem and you having to solve it yourself,’ he adds.
The first HoloLens was released in 2016, followed by HoloLens 2 in November 2019. The device is equipped with sophisticated environmental mapping hardware and an eye-tracking camera that enables it to understand the space it’s being used in and what the user is focused on.
With the ability to connect to Microsoft Teams, HoloLens also enables surgeons to collaborate with and help other surgeons globally. In December 2020, Dr John Erickson, a US orthopaedic hand and upper extremity surgeon, and Dr Thomas Grégory, chief of the department of orthopedic and trauma surgery at Avicenne Hospital in Paris, used this mixed reality technology to assist Brazilian surgeon Dr Bruno Gobbato, who repaired a collarbone fracture and performed a shoulder arthroscopy.
Drs Grégory and Erickson were linked to Dr Gobbato’s headset via the Microsoft Dynamics 365 Remote Assist app and shared his field of view on their computer screens through Microsoft Teams. They could see the patient and the holographic images Dr Gobbato generated from a CT scan, one showing the patient’s damaged clavicle and another replicating his healthy clavicle. The three surgeons on three continents discussed how to approach the procedure, conferring on each step and sharing their respective approaches.
‘They were my partners helping me with the surgery,’ Dr Gobbato said in a Microsoft news release. ‘We had a French perspective, we had an American perspective, and we had a Latin American perspective. We had one-quarter of the world inside the operating room.’
‘It’s valuable to surgeons to go through the surgery ahead of time, and it leads to better outcomes for patients because you are mentally checked in, have planned the surgery and know what to expect,’ Dr Erickson told Health Tech magazine.
Such software now allows surgeons to create immersive 3D models of anatomical parts to better prepare for procedures, which can result in shorter operating times, less blood loss and less risk.
‘Eventually, you could have your entire operating team wearing devices and sharing an environment where even your techs and residents can see the same model you are using. We are really just scratching the surface,’ he adds.
Applications in medical imaging
AI has become the most discussed topic in medical imaging research today, both diagnostic and therapeutic, and is already being used in radiology in a number of ways, such as computer-aided detection for cancer and auto-segmentation of organs in 3D postprocessing.
Multiple studies have indicated that AI tools can perform just as well, if not better, than human clinicians at identifying features in images quickly and precisely.
In a 2017 study4 from Case Western Reserve University, researchers found that a deep learning network identified the presence of invasive forms of breast cancer in pathology images with 100 percent accuracy.
At Indiana University-Purdue University Indianapolis, machine learning also reached 100 percent accuracy when asked to predict remission rates for acute myelogenous leukemia.
In 2016, Stanford University researchers trained a computer to accurately identify the differences between two types of lung cancer, and ended up with an algorithm that could predict survival rates more accurately than its human counterparts.
‘Pathology as it is practiced now is very subjective,’ said Michael Snyder, PhD, Professor and Chair of Genetics at Stanford University. ‘Two highly skilled pathologists assessing the same slide will agree only about 60 percent of the time. This approach replaces the subjectivity with sophisticated, quantitative measurements that we feel are likely to improve patient outcomes.’
The machine learning tool was able to identify many more cancer-specific characteristics than can be observed by clinicians, offering the possibility of more personalised treatments and therapies.
Standardised 3D and 4D advanced imaging has been used for some time in the plastic surgery arena by technology platforms such as Crisalix and Vectra to simulate results and planning in the preoperative setting.
Facial recognition technology
Combining image analysis and deep neural networks, facial recognition technology (FRT) recognises patterns and takes unique biometric measurements that are used to interpret facial characteristics.
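At their simplest, those biometric measurements are distances and ratios between facial landmarks located by the neural network. The sketch below is a simplified illustration with hypothetical landmark coordinates (not any vendor’s actual pipeline): once landmarks are placed, geometry between them becomes a feature vector the system can compare or classify.

```python
# Illustrative sketch: turning facial landmarks into biometric features.
# Landmark coordinates here are made up for demonstration.
import math

def dist(a, b):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical 2D landmarks, in pixels.
landmarks = {
    "left_eye": (120, 150),
    "right_eye": (200, 150),
    "nose_tip": (160, 210),
    "mouth": (160, 260),
}

features = {
    "interocular": dist(landmarks["left_eye"], landmarks["right_eye"]),
    "eye_to_nose": dist(landmarks["left_eye"], landmarks["nose_tip"]),
    "nose_to_mouth": dist(landmarks["nose_tip"], landmarks["mouth"]),
}
# Dividing by the interocular distance makes the feature scale-invariant,
# so it survives changes in image resolution or camera distance.
features["nose_mouth_ratio"] = features["nose_to_mouth"] / features["interocular"]
print(features)
```

Production systems track dozens to hundreds of landmarks in 3D, but the principle of deriving stable geometric measurements from them is the same.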
In yet another application of AI technology, FRT is being increasingly used and refined to reveal a range of medical conditions. The Face2Gene clinician-only app is a suite of phenotyping applications that facilitate comprehensive and precise genetic evaluations. The app also allows clinicians to detect rare genetic conditions, which can be missed by physicians simply because they might not have come across those during their clinical practice.
Anura, a consumer app “intended to improve your awareness of general wellness”, developed by Nuralogix, uses FRT along with transdermal optical imaging to extract blood flow information from the face. It is the world’s first app allowing for contactless blood pressure measurement, without the need for cuffs or other wearables.
The app also measures other physiological and psychological indexes, including heart rate, stress level and a range of cardiovascular parameters – all from a front-facing smartphone camera.
Specific to plastic surgery, one FRT model, published in Plastic and Reconstructive Surgery in 20165, was able to classify facial beauty in patients relative to postoperative target features, which may be beneficial in estimating patient satisfaction and setting appropriate expectations before surgery.
What’s on the horizon: big data portals & robotic surgeons
Murphy and Saleh3 predict the individualisation of evidence-based medicine – precision medicine – is imminent. AI and the capture of big data will accelerate the evolution of precision medicine, particularly through the incorporation of
genetic data. The authors note that given the variability in data acquisition across healthcare platforms, machine learning is key to assimilate historical data and project a benefit to both patients and healthcare providers. Output functions may complement the plastic surgeon’s cognition to influence clinical decisions, predict the success of interventions and calculate the risk of postoperative complications.
Centralised big data portals could collate information submitted by plastic surgeons across the world to create large databases for interpretation using AI algorithms. ‘These portals could accelerate our understanding of disease pathogeneses and genotypic risks,’ write Murphy and Saleh, ‘and could deduce best-practice protocols for aspects of plastic surgery that currently lack robust evidence.’ Such scenarios might include defining optimal margins for skin cancer excisions and predicting failures following oncological reconstructions, they add.
In terms of robot surgeons, current scientific evidence for AI systems that can perform or complement surgery is limited and interventions remain in their infancy. However, interest in this field continues to grow. The authors note that an AI robotic surgical system acting as a navigational aid for surgeons while operating has been shown to aid intraoperative decisions. As well, an autonomous robotic surgical system which uses supervised AI to perform basic surgical procedures without requiring direct involvement of a surgeon has also been developed.
When performing a number of very basic surgical skills on porcine tissues, its outcomes were better than those of expert surgeons and robot-assisted surgery, showing that the dexterity and cognition required for basic surgical skills can be programmed into an AI model.
‘Robotics is an ideal technology for the operating room, especially when it comes to what is known as ‘collaborative robotics’,’ says Thomas Heiliger from Brainlab medical technology company. ‘These devices, designed to work hand-in-hand with an operator, do not replace the surgeon or other OR staff but rather support the team in ways a human being cannot. Surgical robots have the ability to be more precise than the human hand and complete repetitive tasks easily and consistently.
‘As the technology continues to advance, I expect to see the development of entirely new surgical techniques that are only possible with robotic assistance. I personally believe that this will be one of the most significant advancements in surgery this decade.’
Plastic surgeons at the precipice of change
Plastic surgeons have long been regarded as innovators of change and early adopters of new techniques and technologies. This adaptability will be crucial to implement the forthcoming digital advancements.
In a 2019 article6 on AI and precision medicine, the authors note: ‘Plastic surgeons are innovators and are frequently at the forefront of novel advancements in medicine. From the development of skin grafts to transplantation, the field of plastic and reconstructive surgery has grown tremendously because of our ability to incorporate new technologies rapidly and successfully.
‘The plastic surgeon’s ability to be “plastic” will enable us to fully capture the potential of thinking machines. In recent years, AI has been integrated into many fields that require imaging, including radiology and pathology. We envision that when the field of plastic surgery embraces big data, the cognition involved in patient diagnosis, surgical planning and outcome assessment may be accomplished by the computer.’
Until then, it’s definitely a case of “watch this space”. AMP
1. Jarvis T, Thornburg D, Rebecca AM, Teven CM. Artificial Intelligence in Plastic Surgery: Current Applications, Future Directions, and Ethical Implications. Plast Reconstr Surg Glob Open. 2020;8(10):e3200. Published 2020 Oct 29. doi:10.1097/GOX.0000000000003200
2. Knoops PGM, Papaioannou A, Borghi A, et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci Rep. 2019;9(1):13597. Published 2019 Sep 19. doi:10.1038/s41598-019-49506-1
3. Murphy DC, Saleh DB. Artificial Intelligence in plastic surgery: What is it? Where are we now? What is on the horizon? Ann R Coll Surg Engl. 2020 Oct;102(8):577-580. doi: 10.1308/ rcsann.2020.0158. Epub 2020 Aug 11. PMID: 32777930; PMCID: PMC7538735.
4. Cruz-Roa A, Gilmore H, Basavanhally A, Feldman M, Ganesan S, Shih NNC, Tomaszewski J, González FA, Madabhushi A. Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent. Sci Rep. 2017 Apr 18;7:46450. doi: 10.1038/srep46450. PMID: 28418027; PMCID: PMC5394452.
5. Kanevsky J, Corban J, Gaster R, Kanevsky A, Lin S, Gilardino M. Big Data and Machine Learning in Plastic Surgery: A New Frontier in Surgical Innovation. Plast Reconstr Surg. 2016 May; 137(5):890e-897e.
6. Kim YJ, Kelley BP, Nasser JS, Chung KC. Implementing Precision Medicine and Artificial Intelligence in Plastic Surgery: Concepts and Future Prospects. Plast Reconstr Surg Glob Open. 2019;7(3):e2113. Published 2019 Mar 11. doi:10.1097/GOX.0000000000002113
Technologies To Watch
3D Printing
‘In plastic surgery, 3D printing has mostly been used for modelling; that is, printing out models of structures and features (eg, the nose), manipulating them on the computer to make them look better, and then printing them out again for comparison against the originals. This is already having an impact on preoperative surgical planning, optimising outcomes, and other aspects of the surgical process,’ says US board-certified plastic surgeon Dr Gary D. Breslow in an article for Zwivel.com.
So far, the technology isn’t really effective for producing the types of implants used in plastic surgery, says Dr Patrick J. Byrne, Professor and Chairman of the Cleveland Clinic Head and Neck Institute and Director of the Division of Facial Plastic and Reconstructive Surgery at the School of Medicine. ‘The constraint is on the quality of the produced structures, and our ability to get these structures to survive upon implantation in the human body,’
he says. ‘It can be done, but with current technology the structures will not be able to withstand and survive the effects of scar contracture and poor blood supply.’
But he says that once researchers can crack the code to achieving product stability, things could rapidly progress.
‘As 3D printers become cheaper and the technology for using these devices more sophisticated, the use of 3D printing for generating tissue and potentially organs will inevitably grow,’ says Dr Gregory A. Buford, a US board-certified plastic surgeon. ‘The use of this practice in plastic surgery could be extended to address areas such as trauma reconstruction where printed materials could be used to replace lost areas of tissue.’
Looking ahead, the concept of 4-dimensional (4D) modelling is being investigated. 4D is 3D printing with the factor of time added, allowing the surgeon to visualise how components interact with one another while in motion.
Personalised tissue engineering
Tissue engineering is not new to medicine – lab-grown bladders and functional vaginas have already been successfully implanted in patients. In the future, it could mean that a host of physical structures, such as skin, can be grown in the lab and then implanted to restore form and function.
The prospect of advances in tissue engineering is exciting to plastic surgeons because of the specialty’s emphasis on manipulation of tissue, says US plastic surgeon Dr Sam Lin. ‘Plastic surgery is sometimes limited due to confinements of available tissue,’ he explains, and this is something tissue engineering might be able to help tackle.
According to Dr Lin, advancements in tissue engineering may aid plastic surgeons in providing patients with more options — such as autologous fat grafting
and the use of skin flaps — and in optimising outcomes.
De-coding AI
(according to the Merriam-Webster dictionary)
Artificial Intelligence:
A branch of computer science dealing with the simulation
of intelligent behaviour in computers; the ability of a machine to imitate intelligent human behaviour.
Machine Learning:
The process by which a computer is able to improve its own performance (as in analysing image files) by continuously incorporating new data into an existing statistical model.
Big Data:
An accumulation of data that is too large and complex for processing by traditional database management tools; large data sets or systems and solutions developed to manage such large accumulations of data.
Augmented Reality:
An enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device.
Virtual Reality:
An artificial environment which is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one’s actions partially determine what happens in the environment.
Augmented Reality Of Beauty
Augmented reality has been a big part of our social media lives for years now – filters and photo editing apps all use AR to allow users to experiment with different effects to change the way they look. AR-enabled virtual makeover apps blend users’ real-time videos with digital overlays to ‘try on’ makeup, hair colour or even plumper lips and a smaller nose – using just an app, the front-facing camera on their smartphone or tablet and intelligent facial recognition technology.
The two major players in the AR beauty and skin market are leading tech providers ModiFace and YouCam. ModiFace was founded by University of Toronto engineering professor Parham Aarabi in 2006 and has powered AR technology for brands like Sephora, Smashbox and Covergirl. It can also simulate hair changes, anti-ageing treatments and more with its futuristic makeover technology. As well, its skin diagnostic technology analyses the user’s skin condition and produces a customised beauty routine, based on scientific research combined with a ModiFace AI algorithm. The virtual makeover platform was acquired by L’Oréal in 2018 – the first time the beauty multinational had ever acquired a tech company.
The other major player in beauty AR is Perfect Corp (best known for its beauty app YouCam Makeup). Based in Taiwan and led by CEO Alice Chang, Perfect Corp boasts more than 900 million downloads globally and 300 brand partners.
The company uses facial landmark tracking technology, which creates a “3D mesh” around users’ faces for realistic virtual makeovers.
Increasingly, people are requesting cosmetic surgery to emulate the filtered versions of themselves, a trend that has been associated with body dissatisfaction and body dysmorphic disorder. It is perhaps more essential than ever for cosmetic surgeons to consider the mental health of their patients when advising whether cosmetic surgery is appropriate.