{"id":569928,"date":"2022-06-06T15:41:52","date_gmt":"2022-06-06T19:41:52","guid":{"rendered":"https:\/\/www.therobotreport.com\/?p=569928"},"modified":"2024-01-05T11:03:39","modified_gmt":"2024-01-05T16:03:39","slug":"why-robots-need-to-see","status":"publish","type":"post","link":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/","title":{"rendered":"Why robots need to see"},"content":{"rendered":"<article class=\"type-post entry\" aria-label=\"Closing the Loop for True Autonomous Plowing\">\n<div class=\"entry-content\">\n<pre>&nbsp;<\/pre>\n<p><a href=\"https:\/\/www.rgorobotics.ai\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-60407 size-medium\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/04\/R-Go-Robotics-Logo-1-300x101.png\" sizes=\"(max-width: 300px) 100vw, 300px\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/04\/R-Go-Robotics-Logo-1-300x101.png 300w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/04\/R-Go-Robotics-Logo-1.png 368w\" alt=\"RGo_Robotics\" width=\"300\" height=\"101\"><\/a>Most autonomous vehicle manufacturers incorporate high-end 3D LiDARs, along with additional sensors, into their vehicles so that they are provided with enough data to fully understand their surroundings and operate safely. Yet in April 2019, Elon Musk famously told attendees at Tesla\u2019s Autonomy Day that LiDAR is a \u201cfool\u2019s errand\u201d\u2014and that anyone relying on it is \u201cdoomed,\u201d referring to Tesla\u2019s preference for vision-based perception.<\/p>\n<p>The LiDAR \/ vision debate continues to this day. 
But since that time there has been a steadily increasing emphasis on cameras and computer vision in the autonomous vehicle market.<\/p>\n<h2><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-full wp-image-60500\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-3-3.png\" sizes=\"(max-width: 200px) 100vw, 200px\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-3-3.png 200w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-3-3-150x300.png 150w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-3-3-119x238.png 119w\" alt=\"Side 3\" width=\"200\" height=\"400\">Vision-based Navigation for AMRs<\/strong><\/h2>\n<p>Recently, the same debate has emerged in the mobile robot market where traditional 2D LiDARs have been the prevailing navigation sensor for decades. Some AMR manufacturers, including Canvas Technology (acquired by Amazon),&nbsp;Gideon Brothers, and&nbsp;Seegrid, have already developed AMRs with varying degrees of vision-based navigation.<\/p>\n<p>One reason why these AMR companies have opted for camera-based navigation solutions is the lower cost of vision systems compared to LiDAR. But the most compelling reason is the ability of vision-based systems to enable full 3D localization and perception.<\/p>\n<\/div>\n<div class=\"entry-content\">\n<h2><strong>Seeking Alternatives<\/strong><\/h2>\n<p>3D LiDAR is also an option for robotics developers looking to add 3D perception capabilities into their systems. But while the price of 3D LiDAR solutions has dropped over the past few years, the total system cost for 3D perception continues to be many thousands of dollars.<\/p>\n<p>For the robotics sector, the cost of automotive grade 3D LiDAR is usually prohibitive. 
As a result, robot manufacturers continue to seek less expensive alternatives to 3D LiDAR for 3D perception.<\/p>\n<hr>\n<p class=\"large\"><span style=\"color: #808080;\"><strong><em>Cameras can see natural features on the ceiling, floor, and far into the distance on the other side of a facility.<\/em> <\/strong><\/span><\/p>\n<hr>\n<h2><strong>Camera-based Vision Systems<\/strong><\/h2>\n<p>Camera-based vision systems are inherently up to the perception challenge since they can \u2018see\u2019 and digitize everything in their field of view.&nbsp; Leveraging economies-of-scale from other industries, even cameras costing under $20 provide enough resolution and field-of-view to support robust localization, obstacle detection, and higher levels of perception.<\/p>\n<h2><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-full wp-image-60499\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-2-2.png\" sizes=\"(max-width: 200px) 100vw, 200px\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-2-2.png 200w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-2-2-150x300.png 150w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-2-2-119x238.png 119w\" alt=\"Side 2\" width=\"200\" height=\"400\">Localization in Challenging Environments<\/strong><\/h2>\n<p>Another important advantage of vision-based navigation is the ability to handle challenging environments where LiDARs lose robustness. The classic example is a logistics warehouse where rows of racks and shelving systems are repeated throughout the facility.<\/p>\n<p>Cameras can also see natural features on the ceiling, floor, and far into the distance on the other side of a facility. But the 2D \u2018slice\u2019 of the world that a LiDAR can see is simply not enough to distinguish between the different, repetitive features in these environments. 
As a result, LiDAR-based robots can get confused or even completely lost in many situations.<\/p>\n<p>These same challenges also apply to open or highly dynamic environments like cross-docking and open warehousing facilities. The \u2018slice\u2019 that a LiDAR saw and interpreted during its last visit may now be open space \u2013 or something else altogether.<\/p>\n<\/div>\n<\/article>\n<hr>\n<p class=\"large\"><strong><em><span style=\"color: #808080;\">Ultimately, to achieve truly intelligent autonomous behavior, navigation systems must deliver human-level, 3D perception.<\/span><\/em><\/strong><\/p>\n<article class=\"type-post entry\" aria-label=\"Why robots need to see\">\n<div class=\"entry-content\">\n<hr>\n<h2><strong>3D Perception and Scene Understanding<\/strong><\/h2>\n<p>Finally, and most importantly, vision-based perception can enable capabilities that are simply beyond other types of sensors. Ultimately, to achieve truly intelligent autonomous behavior, navigation systems must deliver human-level, 3D perception. For example, since they can detect texture and color, cameras are able to distinguish between the edge of a sidewalk and the edge of the road. 
This creates significant safety advantages for delivery robots: the robot can use this visual information to navigate precisely along the sidewalk\u2019s edge, just the way a human would.<\/p>\n<p>This capability is useful in warehouses and manufacturing facilities where pedestrian paths are defined with lines and floor markers.&nbsp; Camera-based systems can even read signs and symbols that alert both humans and robots to temporary closures, wet floors, and detours.&nbsp; Vision-based navigation systems are also able to work in both indoor and outdoor environments \u2013 opening up new use cases and applications.<\/p>\n<h2><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-full wp-image-60498\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-1-1-1.png\" sizes=\"(max-width: 200px) 100vw, 200px\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-1-1-1.png 200w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-1-1-1-150x300.png 150w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Side-1-1-1-119x238.png 119w\" alt=\"Side 1\" width=\"200\" height=\"400\">The Challenge<\/strong><\/h2>\n<p>Converting the large volume of data from cameras into 3D perception on low-cost hardware is a monumental technology and engineering challenge. The process requires significant AI, computer vision, and sensor fusion expertise on the part of engineers, along with the availability of enabling technologies.<\/p>\n<p>Thankfully, robust, high-performance solutions for camera-based 3D perception are now available to robotics engineers. 
For example, <strong><span style=\"color: #993300;\"><a style=\"color: #993300;\" href=\"https:\/\/www.rgorobotics.ai\" target=\"_blank\" rel=\"noopener\">RGo Robotics<\/a><\/span><\/strong>\u2019 <strong><span style=\"color: #993300;\"><a style=\"color: #993300;\" href=\"https:\/\/www.rgorobotics.ai\/innovation\" target=\"_blank\" rel=\"noopener\">Perception Engine<\/a><\/span><\/strong> is a full-stack software solution that enables manufacturers to deliver next-generation capabilities rapidly. In some applications, it can achieve precise 3D localization and perception with just a single camera. Its wide-field-of-view camera can also recognize humans and other obstacles around the robot. This level of scene understanding allows mobile robots to behave more naturally and collaboratively around humans.<\/p>\n<h2><strong>Additional Modalities<\/strong><\/h2>\n<p>That said, there remains significant value in traditional sensor modalities, including LiDAR. Recent advancements in low-cost MEMS 3D LiDARs are encouraging and, when combined with cameras, could add cost-effective robustness and rich 3D mapping capabilities to robotics systems.<\/p>\n<p>But Musk was correct in saying that cameras and computer vision should serve as the foundation of any mobile robot navigation system. 
The next few years will certainly see dynamic changes as the state-of-the-art evolves with advances in both the autonomous vehicle and robotics industries.<\/p>\n<hr>\n<h2><strong>About the Author<br \/>\n<\/strong><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-60488 size-thumbnail\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Secor-Peter-2-150x150.png\" sizes=\"(max-width: 150px) 100vw, 150px\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Secor-Peter-2-150x150.png 150w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Secor-Peter-2-300x300.png 300w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Secor-Peter-2-238x238.png 238w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2022\/06\/Secor-Peter-2.png 500w\" alt=\"Peter Secor\" width=\"150\" height=\"150\"><\/h2>\n<p><span style=\"font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;\">As SVP Marketing &amp; Business Development, Peter Secor is responsible for building RGo Robotics\u2019 brand and identifying new customer and market opportunities for the company.&nbsp; Prior to RGo, he held transformative positions with companies at the leading edge of IoT, industrial automation, robotics, and 3D printing, including iRobot and Stratasys.&nbsp; Secor started his career as a management consultant specializing in corporate strategy development and M&amp;A for Fortune 500 companies in the industrial automation market, including Rockwell Automation, Siemens, and Honeywell.&nbsp; He holds a BS in Mechanical Engineering from the University of New Hampshire and an MBA from Columbia Business School with a concentration in technology growth marketing.<\/span><\/p>\n<\/div>\n<\/article>\n","protected":false},"excerpt":{"rendered":"<p>The autonomous vehicle and robotics sectors often 
employ LiDAR as the primary system navigation sensor. But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.<\/p>\n","protected":false},"author":146,"featured_media":569925,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"rbr50_analysis":"","rbr50_state":"","rbr50_country":"","rbr50_description":"","rbr50_numemps":"","rbr50_text_taxonomy_radio":null,"rbr50_text_taxonomy_select":null,"rbr50_url":"","rbr50_yearfounded":"","_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","ngg_post_thumbnail":0,"footnotes":""},"categories":[2013,2005,2455,2131,1390,2008],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why robots need to see - The Robot Report<\/title>\n<meta name=\"description\" content=\"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why robots need to see - The Robot Report\" \/>\n<meta property=\"og:description\" content=\"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. 
But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/\" \/>\n<meta property=\"og:site_name\" content=\"The Robot Report\" \/>\n<meta property=\"article:published_time\" content=\"2022-06-06T19:41:52+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-01-05T16:03:39+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"650\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Dan Kara\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@RobotReportKara\" \/>\n<meta name=\"twitter:site\" content=\"@therobotreport\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Dan Kara\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/\",\"url\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/\",\"name\":\"Why robots need to see - The Robot Report\",\"isPartOf\":{\"@id\":\"https:\/\/www.therobotreport.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg\",\"datePublished\":\"2022-06-06T19:41:52+00:00\",\"dateModified\":\"2024-01-05T16:03:39+00:00\",\"author\":{\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/767c0002abe1f54b46facac7e910b2bc\"},\"description\":\"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. 
But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage\",\"url\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg\",\"contentUrl\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg\",\"width\":1024,\"height\":650,\"caption\":\"Robots Vision LiDAR\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.therobotreport.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why robots need to see\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.therobotreport.com\/#website\",\"url\":\"https:\/\/www.therobotreport.com\/\",\"name\":\"The Robot Report\",\"description\":\"Robotics news, research and analysis\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.therobotreport.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/767c0002abe1f54b46facac7e910b2bc\",\"name\":\"Dan 
Kara\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/44bcc464d63a5f8ec3cc9de46471384b?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/44bcc464d63a5f8ec3cc9de46471384b?s=96&d=mm&r=g\",\"caption\":\"Dan Kara\"},\"description\":\"Dan Kara is Vice President, Research &amp; Analyst Services at WTWH Media. He can be reached at dkara@wtwhmedia.com.\",\"sameAs\":[\"https:\/\/x.com\/RobotReportKara\"],\"url\":\"https:\/\/www.therobotreport.com\/author\/dkara\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Why robots need to see - The Robot Report","description":"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/","og_locale":"en_US","og_type":"article","og_title":"Why robots need to see - The Robot Report","og_description":"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. 
But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.","og_url":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/","og_site_name":"The Robot Report","article_published_time":"2022-06-06T19:41:52+00:00","article_modified_time":"2024-01-05T16:03:39+00:00","og_image":[{"width":1024,"height":650,"url":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg","type":"image\/jpeg"}],"author":"Dan Kara","twitter_card":"summary_large_image","twitter_creator":"@RobotReportKara","twitter_site":"@therobotreport","twitter_misc":{"Written by":"Dan Kara","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/","url":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/","name":"Why robots need to see - The Robot Report","isPartOf":{"@id":"https:\/\/www.therobotreport.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage"},"image":{"@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage"},"thumbnailUrl":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg","datePublished":"2022-06-06T19:41:52+00:00","dateModified":"2024-01-05T16:03:39+00:00","author":{"@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/767c0002abe1f54b46facac7e910b2bc"},"description":"The autonomous vehicle and robotics sectors often employ LiDAR as the primary system navigation sensor. 
But cameras and vision-based perception will increasingly serve as the technological underpinning for mobile robots going forward.","breadcrumb":{"@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.therobotreport.com\/why-robots-need-to-see\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#primaryimage","url":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg","contentUrl":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Robots_Need_to_See-Feature-1024x650-4.jpg","width":1024,"height":650,"caption":"Robots Vision LiDAR"},{"@type":"BreadcrumbList","@id":"https:\/\/www.therobotreport.com\/why-robots-need-to-see\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.therobotreport.com\/"},{"@type":"ListItem","position":2,"name":"Why robots need to see"}]},{"@type":"WebSite","@id":"https:\/\/www.therobotreport.com\/#website","url":"https:\/\/www.therobotreport.com\/","name":"The Robot Report","description":"Robotics news, research and analysis","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.therobotreport.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/767c0002abe1f54b46facac7e910b2bc","name":"Dan Kara","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/44bcc464d63a5f8ec3cc9de46471384b?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/44bcc464d63a5f8ec3cc9de46471384b?s=96&d=mm&r=g","caption":"Dan Kara"},"description":"Dan Kara is Vice President, Research 
&amp; Analyst Services at WTWH Media. He can be reached at dkara@wtwhmedia.com.","sameAs":["https:\/\/x.com\/RobotReportKara"],"url":"https:\/\/www.therobotreport.com\/author\/dkara\/"}]}},"_links":{"self":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts\/569928"}],"collection":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/users\/146"}],"replies":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/comments?post=569928"}],"version-history":[{"count":0,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts\/569928\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/media\/569925"}],"wp:attachment":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/media?parent=569928"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/categories?post=569928"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/tags?post=569928"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}