
Artificial intelligence, robotics and eye surgery: are we overfitted?


Eye surgery, and specifically retinal microsurgery, involves sensory and motor skills that approach human physiological limits for steadiness, accuracy, and the ability to detect the small forces involved. Despite assumptions as to the benefit of robots in surgery, and despite great development effort, numerous challenges to the full development and adoption of robotic assistance in surgical ophthalmology remain. Historically, the first in-human robot-assisted retinal surgery occurred nearly 30 years after the first experimental papers on the subject. Similarly, artificial intelligence emerged decades ago and is only now being more fully realized in ophthalmology. The delay between conception and application has in part been due to the technological advances necessary to implement new processing strategies, chief among them the well-matched processing power of specialty graphics processing units for machine learning. Transcending the classic concept of robots performing repetitive tasks, artificial intelligence and machine learning are related concepts that have proven their ability to design concepts and solve problems. The implication of such abilities is that future machines may further intrude on the domain of heretofore “human-reserved” tasks. Although the potential of artificial intelligence/machine learning is profound, present marketing promises and hype exceed its stage of development, analogous to the seventeenth-century mathematical “boom” in algebra. Nevertheless, robotic systems augmented by machine learning may eventually improve robot-assisted retinal surgery and could potentially transform the discipline. This commentary analyzes advances in retinal robotic surgery, its current drawbacks and limitations, and the potential role of artificial intelligence in robotic retinal surgery.


Artificial intelligence was a term coined in 1955 by McCarthy et al. in a proposal for a workshop to be held the following year at Dartmouth College [1]. The meeting was to be a collaborative effort aimed at developing machines that could not only use language, but also design concepts and solve problems. In 1988, the first robot-assisted procedure, a robot-guided brain biopsy, was published by Kwoh et al. [2]. In some ways the concept of machine assistance and autonomous task performance had arisen well before McCarthy’s proposal. The birth of mathematical logic, highlighted by works such as Principia Mathematica in 1912 [3], bolstered an idea recurrent since the 17th century: that systematically applied algebra could ultimately reproduce human thinking, and that this substitution could lead to automated thought. This scientific “boom” inspired later science fiction, but most importantly many researchers of the time were excited to dedicate their resources and research efforts to these ideas.

By way of example, in 1950 Alan Turing published a now-celebrated article [4], some years before the term ‘artificial intelligence’ was first used and many years before the first robot-assisted clinical procedure was published. In it, Turing suggested that machines could reproduce human thought, an assertion based on the principle that human decisions rest on available information. He concluded that this hypothesis could be tested and verified if the machine’s decision was nearly indistinguishable from the human’s. This concept and its further development, in conjunction with advances in technology, raised concerns about machine–human substitution and stimulated discussion of the eventual role of machinery in decision-making tasks [5,6,7]. Whatever the concerns, early technology development released humans from simple and repetitive tasks. Now, decades later, humanity is faced with integrating such advanced technology as self-driving cars [8], FDA-approved surgical robots [9] and AI systems that appear human in an unsettling, “Turing-provocative” way [10]. Just as many of the technological concepts being developed were designed around the architecture of human thinking [11], advances in artificial intelligence were also modeled on human cortical systems, in particular the visual pathways [12].

All inspiration and optimism aside, the role of robotic assistance in modifying outcomes in ophthalmology is still evolving, and significant obstacles to consolidating robotics in medicine remain. These include, but are not limited to, implementation costs, safety concerns, questionable efficiency and unproven efficacy [13,14,15]. In retinal robotic surgery these obstacles weigh even more heavily, owing to the delicate, fragile, transparent, unforgiving, nonregenerative and micron-scale nature of the target tissue, not to mention the early stage of development of such robots at this time [16,17,18]. Given the uniquely fragile retinal tissue, most procedures in the constrained intraocular environment demand exceedingly high dexterity, concentration and tremor control, raising the questions of how recent advances in robotics and artificial intelligence might prove beneficial and of identifying the opportunities in which this technology could be applied.

Human accuracy in retinal microsurgery is reported to be, at best, between 20 and 40 µm [19], as average human tremor is approximately 100 µm in its peak-to-peak excursion [20]. Likewise, the average human threshold for tactile perception is reported to be approximately 7.5 mN [21], which coincidentally is the force reported in prior work to be sufficient to cause a tear in the retina of a rabbit [22]. In such a scenario, a robot’s stability, sturdiness and precision might be exceedingly useful, and delicate intraocular procedures, e.g. membrane peeling, subretinal treatments and vein cannulation, could benefit from such systems. In 1989 the first robotic assistance in ophthalmology was published by Guerrouad and Vidal [23]. This in turn encouraged further studies in areas from force-sensitive instruments [24, 25] to robotic platform development [26,27,28], with active research centers at Johns Hopkins University, Katholieke Universiteit Leuven, the University of Oxford and the University of California.

Despite these advances, many challenges remain, including but not limited to reducing instrument size and cost, implementing fast data processing and adapting systems to smaller work environments. It was not until 2018, 18 years after the da Vinci robot’s approval for laparoscopic use, that the first human study with an eye robot was published [29]. Although it provided promising results, questions on safety, efficiency and cost remain. Although these systems can be used throughout an entire vitrectomy procedure, the loss of instrument awareness and the limits on the robot’s end-effector velocity and tilt angle favor task-specific use, with the macula as a preferable target site. Emerging directions in instrument awareness include force-sensing instruments [24, 30], with present-generation tools utilizing optical sensors. Robots equipped with force-sensing capabilities are intriguing in part due to their potential to enhance safety, efficacy and functionality, as well as to provide feedback from the robotic tool in use in order to improve function and utility [31]. At present, the robotic platform alone does not replicate the free-hand surgical experience, but further development of the robotic tool, including enhanced sensor input and incorporation of machine learning, shows promise.
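The kind of safety logic that force-sensing tools make possible can be illustrated with a minimal sketch. The 7.5 mN figure is the rabbit-retina tear force quoted earlier; the caution fraction and the linear velocity ramp are illustrative assumptions for this sketch, not any published controller:

```python
# Hypothetical safety monitor: scale the commanded end-effector velocity down
# as the measured tool-tip force approaches a retinal safety threshold.
RETINAL_TEAR_FORCE_MN = 7.5   # ~force reported to tear rabbit retina (see text)
CAUTION_FRACTION = 0.5        # start slowing at half the threshold (assumed)


def velocity_scale(force_mn: float) -> float:
    """Return a multiplier in [0, 1] applied to the commanded velocity."""
    caution = CAUTION_FRACTION * RETINAL_TEAR_FORCE_MN
    if force_mn <= caution:
        return 1.0               # well below threshold: full speed
    if force_mn >= RETINAL_TEAR_FORCE_MN:
        return 0.0               # at or beyond the tear threshold: halt
    # Linear ramp between the caution level and the threshold
    return (RETINAL_TEAR_FORCE_MN - force_mn) / (RETINAL_TEAR_FORCE_MN - caution)
```

A real controller would of course filter the sensor signal and combine force with position and velocity constraints; the point here is only that a sub-perceptual force reading (below the ~7.5 mN human tactile threshold) can be acted on automatically.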

Artificial intelligence, and in particular machine learning, is achieving increasing utility in ophthalmology. Recent advances were made possible by the incorporation of graphics processing units (GPUs) into machine learning tasks [32, 33]. With large amounts of data available, some algorithms now report advantages in diagnostics, and even in outcome prediction, for many prevalent ophthalmic conditions, notably retinal disease and glaucoma [34,35,36,37]. Despite these potential benefits, for a number of applications of high interest, sufficiently detailed and categorized data is frequently unavailable. When data is insufficient, the developer may apply techniques such as cross-validation, ensembling and regularization to reduce “overfitting”—the condition in which the algorithm “memorizes” the data rather than learning from it [38]. Despite these and other analytical tools, there are times when quality data is simply insufficient, and at other times it is simply not feasible to produce data on the requisite scale.

In addressing challenges with data availability, there are evolving developments in data acquisition, recording and display. Examples include improved image quality, as demonstrated by the difference between the time-domain optical coherence tomography (OCT) images of 1991 [39] and the swept-source OCT quality of 2019. Novel image sources are provided by, for example, the increased adoption of “heads-up surgery” [40], a 3D viewing system with an embedded 3D camera whose output is recorded and reproduced on a 4K display. Other areas of improvement include, but are not limited to, the collection of data of higher quality, consistency and availability. Further improvements in image quality, increasing data volumes and strategic categorization of data are providing the foundations for next-generation tools in ophthalmology and for the emergence of artificial intelligence/machine learning.

As a future perspective for robot-assisted eye surgery, camera image information combined with other sources of data, such as intraoperative OCT images, robot end-effector position and force-sensing measurements, could improve the ability of a robot to assist in, or primarily perform, selected tasks during surgery. The evolution of neural networks [41, 42], progress in image acquisition [43, 44] and a significant increase in data usage [45] may enhance the safety and effectiveness of robotic procedures, especially in eye surgery [46]. In addition to providing a new source of data, surgical viewing systems such as the “heads-up surgery” systems might also enable augmented reality during retinal procedures. Virtual reality systems such as Eyesi Surgical (VRmagic GmbH, Mannheim, Germany) and deep learning tracking algorithms could be used with robotic control and may help in training, testing and improving systems to increasingly avoid potential iatrogenic injuries. As improved data is the foundation of advancing artificial intelligence, enhanced safety, efficacy and increased reliability are potential outcomes of the incorporation of robotics into ophthalmology.

In conclusion, neither artificial intelligence nor robotics is a novel concept; what is novel is the strategic incorporation of artificial intelligence into robotic systems. Many obstacles to end-user adoption of robotics exist, including but not limited to cost, size, functional limits, accuracy, human acceptance and, importantly, the demonstration of clearly superior outcomes and safety. Early historical concerns about the role of the human in the decision-making process in robotic surgery have been largely put to rest; however, recent developments in artificial intelligence applied to robotics may force the “overfitted” issue to be revisited. In retinal procedures, robotic platforms show promise and the first human studies are encouraging. That artificial intelligence might enhance these systems is logical; the form such augmentation will take is only now emerging. The road to feasibility of robotics augmented by artificial intelligence will meet a number of challenges, and there will continue to be a large and essential human role, most especially in the early stages of technology development. What the ultimate form will be is anyone’s guess, as is the eventual role of humans in microsurgery. For now, however, it is sufficient for engineers and surgeons to work cooperatively to develop cost-effective tools that improve patient safety, enhance procedure efficacy and extend surgical capability, all for the betterment of patient care.

Availability of data and materials

Not applicable.



Abbreviations

FDA: United States Food and Drug Administration

GPU: graphics processing unit

OCT: optical coherence tomography


References

1. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence. AI Mag. 2006.
2. Kwoh YS, Hou J, Jonckheere EA, Hayati S. A robot with improved absolute positioning accuracy for CT-guided stereotactic brain surgery. IEEE Trans Biomed Eng. 1988;35(2):153–60.
3. Whitehead AN, Russell B. Principia mathematica, vol. 2. Cambridge: Cambridge University Press; 1912.
4. Turing AM. Computing machinery and intelligence. In: Cooper SB, van Leeuwen J, editors. Alan Turing: his work and impact. 2013. p. 551–621.
5. Bostrom N. Ethical issues in advanced artificial intelligence. In: Science fiction and philosophy: from time travel to superintelligence. 2003.
6. Nath R, Sahu V. The problem of machine ethics in artificial intelligence. AI Soc. 2017;23:441–58.
7. Kumar N, Kharkwal N, Kohli R, Choudhary S. Ethical aspects and future of artificial intelligence. In: 2016 1st international conference on innovation and challenges in cyber security (ICICCS). 2016.
8. Falcone P, Borrelli F, Asgari J, Tseng HE, Hrovat D. Predictive active steering control for autonomous vehicle systems. IEEE Trans Control Syst Technol. 2007.
9. Leal Ghezzi T, Campos Corleta O. 30 years of robotic surgery. World J Surg. 2016;40(10):2550–7.
10. Leviathan Y, Matias Y. Google Duplex: an AI system for accomplishing real-world tasks over the phone. Google AI Blog. 2018.
11. Shanmuganathan S. Artificial neural network modelling: an introduction. In: Shanmuganathan S, Samarasinghe S, editors. Artificial neural network modelling. Cham: Springer; 2016. p. 1–14.
12. Kruger N, Janssen P, Kalkan S, Lappe M, Leonardis A, Piater J, et al. Deep hierarchies in the primate visual cortex: what can we learn for computer vision? IEEE Trans Pattern Anal Mach Intell. 2013;35:1847–71.
13. Ballantyne GH. The pitfalls of laparoscopic surgery: challenges for robotics and telerobotic surgery. Surg Laparosc Endosc Percutan Tech. 2002.
14. Amodeo A, Linares Quevedo A, Joseph JV, Belgrano E, Patel HRH. Robotic laparoscopic surgery: cost and training. Minerva Urol Nefrol. 2009;61:121–8.
15. De Almeida JR, Genden EM. Robotic surgery for oropharynx cancer: promise, challenges, and future directions. Curr Oncol Rep. 2012;14:148–57.
16. De Smet MD, Naus GJL, Faridpooya K, Mura M. Robotic-assisted surgery in ophthalmology. Curr Opin Ophthalmol. 2018;29:248–53.
17. Mango CW, Tsirbas A, Hubschman JP. Robotic eye surgery. In: Cutting edge of ophthalmic surgery: from refractive SMILE to robotic vitrectomy. 2017.
18. Molaei A, Abedloo E, De Smet M, Safi S, Khorshidifar M, Ahmadieh H, et al. Toward the art of robotic-assisted vitreoretinal surgery. J Ophthalmic Vis Res. 2017;12:212–8.
19. Jagtap AD, Riviere CN. Applied force during vitreoretinal microsurgery with handheld instruments. In: 26th annual international conference of the IEEE Engineering in Medicine and Biology Society. 2004. p. 2771–3.
20. Singh SPN, Riviere CN. Physiological tremor amplitude during retinal microsurgery. In: Proceedings of the IEEE annual northeast bioengineering conference (NEBEC). 2002.
21. Sunshine S, Balicki M, He X, Olds K, Kang J, Gehlbach P, et al. A force-sensing microsurgical instrument that detects forces below human tactile sensation. Retina. 2013.
22. Gupta PK, Jensen PS, de Juan E Jr. Surgical forces and tactile perception during retinal microsurgery. In: International conference on medical image computing and computer-assisted intervention. 1999.
23. Guerrouad A, Vidal P. Stereotaxical microtelemanipulator for ocular surgery. In: Annual international conference of the IEEE Engineering in Medicine and Biology Society. 1989.
24. Gonenc B, Chamani A, Handa J, Gehlbach P, Taylor RH, Iordachita I. 3-DOF force-sensing motorized micro-forceps for robot-assisted vitreoretinal surgery. IEEE Sens J. 2017;17(11):3526–41.
25. Sun Z, Balicki M, Kang J, Handa J, Taylor R, Iordachita I. Development and preliminary data of novel integrated optical micro-force sensing tools for retinal microsurgery. In: Proceedings of the IEEE international conference on robotics and automation. 2009. p. 1897–902.
26. Uneri A, Balicki MA, Handa J, Gehlbach P, Taylor RH, Iordachita I. New steady-hand eye robot with micro-force sensing for vitreoretinal surgery. In: Proceedings of the IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob). 2010. p. 814–9.
27. Nakano T, Sugita N, Ueta T, Tamaki Y, Mitsuishi M. A parallel robot to assist vitreoretinal surgery. Int J Comput Assist Radiol Surg. 2009;4:517–26.
28. Fleming I, Balicki M, Koo J, Iordachita I, Mitchell B, Handa J, et al. Cooperative robot assistant for retinal microsurgery. Lect Notes Comput Sci. 2008;5242(Part 2):543–50.
29. Edwards TL, Xue K, Meenink HCM, Beelen MJ, Naus GJL, Simunovic MP, et al. First-in-human study of the safety and viability of intraocular robotic surgery. Nat Biomed Eng. 2018;2:649–56.
30. Gonenc B, Taylor RH, Iordachita I, Gehlbach P, Handa J. Force-sensing microneedle for assisted retinal vein cannulation. Proc IEEE Sens. 2014;2014:698–701.
31. Ebrahimi A, He C, Patel N, Kobilarov M, Gehlbach P, Iordachita I. Sclera force control in robot-assisted eye surgery: adaptive force control vs. auditory feedback. In: 2019 international symposium on medical robotics (ISMR). 2019. p. 1–7.
32. Chen C, Li K, Ouyang A, Tang Z, Li K. GPU-accelerated parallel hierarchical extreme learning machine on Flink for big data. IEEE Trans Syst Man Cybern Syst. 2017;47:2740–53.
33. Gawehn E, Hiss JA, Brown JB, Schneider G. Advancing drug discovery via GPU-based deep learning. Expert Opin Drug Discov. 2018;13:579–82.
34. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402–10.
35. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2:158–64.
36. Asaoka R, Murata H, Hirasawa K, Fujino Y, Matsuura M, Miki A, et al. Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am J Ophthalmol. 2019;198:136–45.
37. Asaoka R, Murata H, Iwase A, Araie M. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology. 2016;123:1974–80.
38. Adya M, Collopy F. How effective are neural networks at forecasting and prediction? A review and evaluation. J Forecast. 1998;17:481–95.
39. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, et al. Optical coherence tomography. Science. 1991;254(5035):1178–81.
40. Eckardt C, Paulo EB. Heads-up surgery for vitreoretinal procedures: an experimental and clinical study. Retina. 2016;36:137–47.
41. Aviles AI, Alsaleh SM, Hahn JK, Casals A. Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach. IEEE Trans Haptics. 2017;10(3):431–43.
42. Marban A, Srinivasan V, Samek W, Fernández J, Casals A. A recurrent convolutional neural network approach for sensorless force estimation in robotic surgery. Biomed Signal Process Control. 2019;50:134–50.
43. Sarikaya D, Corso JJ, Guru KA. Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans Med Imaging. 2017;36:1542–9.
44. Volkov M, Hashimoto DA, Rosman G, Meireles OR, Rus D. Machine learning and coresets for automated real-time video segmentation of laparoscopic and robot-assisted surgery. In: Proceedings of the IEEE international conference on robotics and automation. 2017.
45. Levine S, Pastor P, Krizhevsky A, Ibarz J, Quillen D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int J Rob Res. 2018;37(4–5):421–36.
46. He C, Patel N, Shahbazi M, Yang Y, Gehlbach PL, Kobilarov M, et al. Toward safe retinal microsurgery: development and evaluation of an RNN-based active interventional control framework. IEEE Trans Biomed Eng. 2019.

Acknowledgements

Not applicable.

Funding

This study was funded by Instituto da Visão - IPEPO (São Paulo, Brazil) and the Lemann Foundation; by Research to Prevent Blindness, New York, New York, USA; and by gifts from the J. Willard and Alice S. Marriott Foundation, the Gale Trust, Mr. Herb Ehlers, Mr. Bill Wilbur, Mr. and Mrs. Rajandre Shaw, Ms. Helen Nassif, Ms. Mary Ellen Keck, Don and Maggie Feiner, and Mr. Ronald Stiff.

Author information




MGU wrote the manuscript. PLG contributed to the conception and writing of the manuscript. NP, CH, AE, RT and IO substantively revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Peter L. Gehlbach.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The work of M. Urias was supported by the Instituto da Visão (IPEPO) and Lemann Foundation. The work of PLG was supported in part by Research to Prevent Blindness, New York, USA, and gifts by the J. Willard and Alice S. Marriott Foundation, the Gale Trust, Mr. Herb Ehlers, Mr. Bill Wilbur, Mr. and Mrs. Rajandre Shaw, Ms. Helen Nassif, Ms. Mary Ellen Keck, and Mr. Ronald Stiff.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


Cite this article

Urias MG, Patel N, He C, et al. Artificial intelligence, robotics and eye surgery: are we overfitted? Int J Retin Vitr. 2019;5:52.



Keywords

  • Robotic surgical procedures
  • Robotics
  • Artificial intelligence
  • Retina
  • Ophthalmology