{"id":568738,"date":"2023-12-06T09:00:12","date_gmt":"2023-12-06T14:00:12","guid":{"rendered":"https:\/\/www.therobotreport.com\/?p=568738"},"modified":"2023-12-06T14:57:37","modified_gmt":"2023-12-06T19:57:37","slug":"terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs","status":"publish","type":"post","link":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/","title":{"rendered":"Terra AI provides 4D perception for robot navigation, manipulation"},"content":{"rendered":"<div align=\"center\">\n<p><iframe loading=\"lazy\" title=\"Quick look at Stereolabs AI tools for robotics\" width=\"740\" height=\"416\" src=\"https:\/\/www.youtube.com\/embed\/O4lQSp_99bo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe><\/p>\n<\/div>\n<p>Stereolabs Inc. today introduced Terra AI as the latest addition to its vision portfolio. The new platform uses monocular and stereo vision to provide 4D volumetric perception, according to the company.<\/p>\n<p>\u201cAs we further collaborate with manufacturers and system integrators, our latest Terra AI technology aims to address a growing need for a more open, adaptable, and scalable vision platform delivering 360-degree perception and driving true autonomy,\u201d stated Cecile Schmollgruber, CEO of Stereolabs. 
\u201cTerra AI will allow companies across industries to invest in autonomous robotics at scale, helping them run their operations with greater efficiency and precision at lower cost.\u201d<\/p>\n<p>Stereolabs provides 3D depth and motion-sensing <a href=\"https:\/\/www.stereolabs.com\/our-technology\" target=\"_blank\" rel=\"noopener\">systems<\/a> based on stereo <a href=\"https:\/\/www.therobotreport.com\/category\/technologies\/cameras-imaging-vision\/\" target=\"_blank\" rel=\"noopener\">vision<\/a> and artificial intelligence. The <a href=\"https:\/\/www.therobotreport.com\/?s=stereolabs\" target=\"_blank\" rel=\"noopener\">company<\/a>, which has offices in New York, San Francisco, and Paris, said its hardware and software stacks and ecosystem support developers of robots, drones, and vehicles.<\/p>\n<h2>Stereolabs designs for simultaneous image processing<\/h2>\n<p>Industries ranging from agriculture and construction to logistics and more are under pressure to increase and accelerate production, even as they contend with labor shortages, noted Stereolabs. 
Such pressure can lead to unsafe working conditions, but robots can alleviate the problems &#8212; if they can evaluate their surroundings, interpret data, and complete tasks independently, it said.<\/p>\n<p>&#8220;Advanced sensing technology is crucial for unlocking those capabilities, and Terra AI is answering the call,&#8221; Stereolabs asserted.<\/p>\n<p>Vision-based full autonomy for robots requires multiple tasks to be performed simultaneously and efficiently on embedded hardware, said the company. They include depth estimation for spatial awareness, visual-inertial odometry for indoor and outdoor localization, semantic understanding for obstacle classification, terrain mapping, and robust detection of people surrounding the robot for safety.<\/p>\n<p>Conventional sensing technology can only process images from one or two cameras at a time, said Stereolabs. Terra AI is designed to perform depth estimation, localization, semantic understanding, obstacle detection, and fusion from multiple cameras in both space and time. 
The company said it can process images from six to eight front, back, and surround cameras for autonomous systems.<\/p>\n<p>&#8220;This new technology represents a leap forward in the ability for autonomous robotics to perceive space,&#8221; it added.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-568749\" src=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg\" alt=\"screenshot of terra AI.\" width=\"770\" height=\"500\" srcset=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg 770w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured-300x195.jpg 300w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured-150x97.jpg 150w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured-768x499.jpg 768w, https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured-368x238.jpg 368w\" sizes=\"(max-width: 770px) 100vw, 770px\" \/><\/p>\n<h2>Jetson and Terra AI offer affordable perception<\/h2>\n<p>Stereolabs said its vision platform combines Terra AI with a high-performance NVIDIA Jetson module, as well as software and the ZED stereo and new mono cameras.<\/p>\n<p>Terra AI is a large neural network architecture that Stereolabs said uses fast, memory-efficient, and low-level code to handle images simultaneously on low-power embedded computers. This enables the processing of high-frequency, high-volume data from ZED cameras on an embedded NVIDIA Jetson platform.<\/p>\n<p>Terra AI also provides a scalable and affordable way to use low-cost cameras with NVIDIA&#8217;s Jetson Orin GPU to deliver a 360-degree view around autonomous machines, said Stereolabs. 
With the enhanced perception, tractors, mowers, utility vehicles, autonomous mobile robots (AMRs), and robotic arms can safely navigate and manipulate without expensive radar or lidar, it said.<\/p>\n<p><iframe loading=\"lazy\" title=\"Stereolabs x NVIDIA | Stereo Vision Meets Simulation: ZED and ZED SDK in Isaac Sim\" width=\"740\" height=\"416\" src=\"https:\/\/www.youtube.com\/embed\/lX6OBTshuO4?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe><\/p>\n<h2>ZED SDK support to come<\/h2>\n<p>Terra AI is now in research preview. It will be available to Stereolabs Enterprise customers in the second quarter of 2024 and later to all users via the ZED software development kit (SDK). More than 100,000 developers worldwide use the ZED SDK for systems that operate around the clock, said the <a href=\"https:\/\/www.stereolabs.com\/our-technology\" target=\"_blank\" rel=\"noopener\">company<\/a>.<\/p>\n<p>The ZED SDK can scale from entry-tier, front-camera applications to higher levels of automation that require comprehensive surround-view camera setups, said Stereolabs.<\/p>\n<p>It added that the SDK supports flexible deployment options, allowing for a common implementation of features and requirements across machine types.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Terra AI is part of Stereolabs&#8217; vision technology stack intended to provide more accurate perception to autonomous 
systems.<\/p>\n","protected":false},"author":90,"featured_media":568749,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"rbr50_analysis":"","rbr50_state":"","rbr50_country":"","rbr50_description":"","rbr50_numemps":"","rbr50_text_taxonomy_radio":"","rbr50_text_taxonomy_select":"","rbr50_url":"","rbr50_yearfounded":"","_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","ngg_post_thumbnail":0,"footnotes":""},"categories":[1391,1382,2013,1290,2019,2005,1388,2010,1401],"tags":[3961],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v22.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Terra AI provides 4D perception for robot navigation, manipulation - The Robot Report<\/title>\n<meta name=\"description\" content=\"Terra AI is part of Stereolabs&#039; vision technology stack intended to provide more accurate perception to autonomous systems.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Terra AI provides 4D perception for robot navigation, manipulation - The Robot Report\" \/>\n<meta property=\"og:description\" content=\"Terra AI is part of Stereolabs&#039; vision technology stack intended to provide more accurate perception to autonomous systems.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/\" \/>\n<meta property=\"og:site_name\" 
content=\"The Robot Report\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T14:00:12+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-06T19:57:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"770\" \/>\n\t<meta property=\"og:image:height\" content=\"500\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"The Robot Report Staff\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@therobotreport\" \/>\n<meta name=\"twitter:site\" content=\"@therobotreport\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"The Robot Report Staff\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/\",\"url\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/\",\"name\":\"Terra AI provides 4D perception for robot navigation, manipulation - The Robot 
Report\",\"isPartOf\":{\"@id\":\"https:\/\/www.therobotreport.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg\",\"datePublished\":\"2023-12-06T14:00:12+00:00\",\"dateModified\":\"2023-12-06T19:57:37+00:00\",\"author\":{\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/4eea693bdcb411b82c1eb5d4fa1bc43e\"},\"description\":\"Terra AI is part of Stereolabs' vision technology stack intended to provide more accurate perception to autonomous systems.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage\",\"url\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg\",\"contentUrl\":\"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg\",\"width\":770,\"height\":500,\"caption\":\"Terra AI does sensor fusion for greater robot precision. 
Source: Stereolabs\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.therobotreport.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Terra AI provides 4D perception for robot navigation, manipulation\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.therobotreport.com\/#website\",\"url\":\"https:\/\/www.therobotreport.com\/\",\"name\":\"The Robot Report\",\"description\":\"Robotics news, research and analysis\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.therobotreport.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/4eea693bdcb411b82c1eb5d4fa1bc43e\",\"name\":\"The Robot Report Staff\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.therobotreport.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d060fc8cbe3fca6afc132da010a1f801?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d060fc8cbe3fca6afc132da010a1f801?s=96&d=mm&r=g\",\"caption\":\"The Robot Report Staff\"},\"url\":\"https:\/\/www.therobotreport.com\/author\/trr-editor\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Terra AI provides 4D perception for robot navigation, manipulation - The Robot Report","description":"Terra AI is part of Stereolabs' vision technology stack intended to provide more accurate perception to autonomous systems.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/","og_locale":"en_US","og_type":"article","og_title":"Terra AI provides 4D perception for robot navigation, manipulation - The Robot Report","og_description":"Terra AI is part of Stereolabs' vision technology stack intended to provide more accurate perception to autonomous systems.","og_url":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/","og_site_name":"The Robot Report","article_published_time":"2023-12-06T14:00:12+00:00","article_modified_time":"2023-12-06T19:57:37+00:00","og_image":[{"width":770,"height":500,"url":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg","type":"image\/jpeg"}],"author":"The Robot Report Staff","twitter_card":"summary_large_image","twitter_creator":"@therobotreport","twitter_site":"@therobotreport","twitter_misc":{"Written by":"The Robot Report Staff","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/","url":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/","name":"Terra AI provides 4D perception for robot navigation, manipulation - The Robot Report","isPartOf":{"@id":"https:\/\/www.therobotreport.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage"},"image":{"@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage"},"thumbnailUrl":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg","datePublished":"2023-12-06T14:00:12+00:00","dateModified":"2023-12-06T19:57:37+00:00","author":{"@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/4eea693bdcb411b82c1eb5d4fa1bc43e"},"description":"Terra AI is part of Stereolabs' vision technology stack intended to provide more accurate perception to autonomous 
systems.","breadcrumb":{"@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#primaryimage","url":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg","contentUrl":"https:\/\/www.therobotreport.com\/wp-content\/uploads\/2023\/12\/Terra-AI-Diagram-featured.jpg","width":770,"height":500,"caption":"Terra AI does sensor fusion for greater robot precision. Source: Stereolabs"},{"@type":"BreadcrumbList","@id":"https:\/\/www.therobotreport.com\/terra-ai-provides-4d-perception-safer-robot-navigation-manipulation-says-stereolabs\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.therobotreport.com\/"},{"@type":"ListItem","position":2,"name":"Terra AI provides 4D perception for robot navigation, manipulation"}]},{"@type":"WebSite","@id":"https:\/\/www.therobotreport.com\/#website","url":"https:\/\/www.therobotreport.com\/","name":"The Robot Report","description":"Robotics news, research and analysis","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.therobotreport.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/4eea693bdcb411b82c1eb5d4fa1bc43e","name":"The Robot Report 
Staff","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.therobotreport.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d060fc8cbe3fca6afc132da010a1f801?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d060fc8cbe3fca6afc132da010a1f801?s=96&d=mm&r=g","caption":"The Robot Report Staff"},"url":"https:\/\/www.therobotreport.com\/author\/trr-editor\/"}]}},"_links":{"self":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts\/568738"}],"collection":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/comments?post=568738"}],"version-history":[{"count":0,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/posts\/568738\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/media\/568749"}],"wp:attachment":[{"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/media?parent=568738"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/categories?post=568738"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.therobotreport.com\/wp-json\/wp\/v2\/tags?post=568738"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}