Worth a read: new from the MIT SPARK Lab, a mechanism for deriving task-dependent scene graphs. Given a set of tasks in natural language, it computes the granularity at which to interpret the robot’s surroundings, focusing on the scene elements relevant to those tasks. Published in IEEE Robotics and Automation Letters and available here: “Clio: Real-Time Task-Driven Open-Set 3D Scene Graphs” https://lnkd.in/guuNRdXT I particularly liked this quote from Luca Carlone: “Search and rescue is the motivating application for this work, but Clio can also power domestic robots and robots working on a factory floor alongside humans.”
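To make the idea concrete, here is a toy sketch of task-driven relevance, emphatically not Clio’s actual algorithm (the paper formulates this as an information bottleneck over 3D scene-graph primitives): score scene elements against natural-language task embeddings and keep only those that clear a threshold. All embeddings below are random stand-ins for a real language/vision model.

```python
import numpy as np

# Toy illustration only -- NOT Clio's method. It sketches the core idea of
# task-driven relevance: score scene elements against natural-language tasks
# and keep the ones that matter. The random vectors below stand in for real
# language/vision embeddings (e.g. from a CLIP-style model).

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def task_relevant(labels, object_embs, task_embs, threshold=0.2):
    """Keep objects whose best similarity to any task clears the threshold."""
    kept = []
    for label, obj in zip(labels, object_embs):
        score = max(cosine(obj, t) for t in task_embs)
        if score >= threshold:
            kept.append((label, round(score, 2)))
    return kept

rng = np.random.default_rng(0)
labels  = ["door", "rubble pile", "chair", "person", "window"]
objects = rng.normal(size=(5, 16))  # stand-in scene-element embeddings
tasks   = rng.normal(size=(2, 16))  # e.g. "find survivors", "clear debris"
print(task_relevant(labels, objects, tasks))
```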
Matt Ellis’ Post
More Relevant Posts
-
Everyone can see the next big thing in AI: ChatGPT + robot AI. But what is really coming next? Causal robots, i.e., sophisticated robots equipped with world models, causal understanding, and high-level reasoning. Join our workshop to find out more!
🥁 🥁 🥁 We are excited to introduce the final speaker lineup for the Causal and Object-Centric Representations for Robotics (CORR) workshop at CVPR 2024. Thomas Kipf, Animesh Garg, Sungjin Ahn, Sara Magliacane, Joao F. Henriques, Rares Ambrus, and Alan Yuille will give invited talks at the CORR workshop! In addition, we will have several contributed talks and poster sessions! So, if you have recent work in object-centric learning, causality, or robotics and want to get feedback and exchange ideas about it during the workshop, be sure to submit a one-page abstract on OpenReview by 10 May: https://lnkd.in/erErevu9 More details on our website: https://lnkd.in/euKSPG7z
-
Would you like to try your planning or machine-learning algorithms on a challenging robot manipulation task? See the work of Haining Luo, a PhD student at our Personal Robotics Laboratory, on benchmarking and simulating bimanual robot shoe lacing, an interesting and fun deformable-manipulation task. Details in a paper just out in IEEE Robotics and Automation Letters: https://lnkd.in/evEEFm8E, or via DOI: https://lnkd.in/eZZ3THHU An open-source, extensible simulator using the ABB YuMi robot, with some baselines, and a longer video: https://lnkd.in/exSXy_fQ
-
The next honorable mention for the 2023 IEEE Transactions on #Robotics King-Sun Fu Memorial Best Paper Award is a paper we shared several months ago. Paper title: Kinegami: Algorithmic Design of Compliant Kinematic Chains From Tubular Origami Authors: Wei-Hsi Chen, Woohyeok Yang, Lucien Peach, Daniel E. Koditschek (GRASP), and Cynthia Sung Paper link: https://lnkd.in/ggRQ7cQf Video caption: The supplemental video presents the introduction, fabrication, and application of a catalog of tubular origami modules and their compositions. It also shows an animated description of the "Kinegami" algorithm and includes a demonstration of an actuated Kinegami catapult. #RobotKinematics #Actuators #OrigamiRobot
Kinegami: Algorithmic Design of Compliant Kinematic Chains from Tubular Origami
-
🔍 Choosing the right camera for vision applications can significantly impact your robotics projects! After a year of working on vision-related robotics, I've learned the importance of understanding the key parameters involved in selecting the right camera. I’ve put together a concise video that breaks down the fundamental concepts to help you determine the ideal camera resolution and lens for your needs. Check it out here: https://lnkd.in/dDFeH_Bz Happy learning! Follow me for weekly insights on robotics!
Systematic approach to choose the camera for Machine Vision application
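As a back-of-envelope companion to the video (a sketch under simple assumptions, not necessarily the video’s exact method), here are two standard sizing calculations: pixels needed so the smallest feature spans a few pixels, and an approximate focal length from sensor width, working distance, and field of view. All numbers below are illustrative.

```python
# Back-of-envelope camera sizing for machine vision. Inputs: field of view,
# smallest feature to resolve, and how many pixels you want across it.

def required_resolution(fov_mm, smallest_feature_mm, pixels_per_feature=3):
    """Pixels needed along one axis so the smallest feature spans
    `pixels_per_feature` pixels (3 is a common rule of thumb)."""
    return fov_mm / smallest_feature_mm * pixels_per_feature

def focal_length(sensor_mm, working_distance_mm, fov_mm):
    """Thin-lens approximation: f ~= sensor size * working distance / FOV."""
    return sensor_mm * working_distance_mm / fov_mm

# Illustrative example: inspect a 200 mm wide part for 0.5 mm defects from
# 400 mm away with a sensor ~7.2 mm wide.
px = required_resolution(fov_mm=200, smallest_feature_mm=0.5)            # 1200 px
f  = focal_length(sensor_mm=7.2, working_distance_mm=400, fov_mm=200)   # ~14.4 mm
print(f"need >= {px:.0f} px horizontally, lens ~ {f:.1f} mm")
```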
-
We have built machine-learning databases for a variety of applications. These data sets revolve around object detection:
✅ Cracks & defects in glass, wire, and steel plate
✅ Weld defects
✅ Tools
✅ Sharps
✅ Edge Recognition
✅ People
We continue to research and develop new software. 𝐢𝟑𝐃 𝐫𝐨𝐛𝐨𝐭𝐢𝐜𝐬 - 𝑠ℎ𝑎𝑝𝑖𝑛𝑔 𝑜𝑢𝑟 𝑙𝑒𝑔𝑎𝑐𝑦 𝑡ℎ𝑟𝑜𝑢𝑔ℎ 𝑖𝑛𝑡𝑒𝑙𝑙𝑖𝑔𝑒𝑛𝑡 𝑣𝑖𝑠𝑖𝑜𝑛 𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛𝑠. https://bit.ly/42PYXYa
-
Simulation-Based Forward Kinematics! 🎥 Following a recent post on solving the forward kinematics of a SCARA robot using Denavit-Hartenberg parameters, here’s an alternative, simulation-based approach. It brings the kinematic model to life, offering a real-time visual representation of the robot’s motion and a clear view of the end-effector’s trajectory. The end-effector coordinates obtained through the simulation match those calculated from the Denavit-Hartenberg parameters, confirming the accuracy of both approaches to modeling the SCARA robot’s motion. #Robotics #SCARA #ForwardKinematics #Simulation #MechanicalEngineering #MATLAB #Automation
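For reference, here is a minimal Python sketch of the DH side of that comparison (the post itself works in MATLAB): forward kinematics for an RRPR SCARA arm, with link lengths and offsets assumed purely for illustration.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def scara_fk(q1, q2, d3, q4, a1=0.25, a2=0.20, d1=0.30):
    """Forward kinematics for an RRPR SCARA arm.
    Link lengths a1, a2 and base height d1 are assumed values."""
    T = np.eye(4)
    for theta, d, a, alpha in [
        (q1, d1, a1, 0.0),       # revolute joint 1
        (q2, 0.0, a2, np.pi),    # revolute joint 2 (alpha flips z for the slide)
        (0.0, d3, 0.0, 0.0),     # prismatic joint 3 (z = d1 - d3)
        (q4, 0.0, 0.0, 0.0),     # revolute wrist (no effect on position)
    ]:
        T = T @ dh(theta, d, a, alpha)
    return T[:3, 3]  # end-effector position

print(scara_fk(np.deg2rad(30), np.deg2rad(45), 0.05, 0.0))
```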
-
I'm at the Workshop on the Algorithmic Foundations of Robotics (WAFR) in Chicago this week, where Yosuke Mizutani is presenting a paper that is a collaboration between myself and Blair Sullivan's group. Can we leverage graph-theory concepts, particularly Fixed-Parameter Tractability (FPT), to create or improve robot inspection-planning algorithms? That is what we explore in this paper! https://lnkd.in/gXK9C8mN
-
I am very interested in learning about ROS for cooperative robots, and I would like to know what kinds of applications it is used for, with some examples. As far as Universal Robots are concerned, the Primary/Secondary/RealTime/RTDE interfaces seem to be used for robot communication, and I can build my own remote-control system using them. For motion simulation, I have built my own system using Fujitsu's iCAD. If you can tell me the advantages of using ROS in this context, I would appreciate your comments.
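One commonly cited advantage of ROS over a point-to-point vendor interface like UR's RTDE is its pub/sub abstraction: named, typed topics with automatic discovery between nodes, instead of a hand-rolled socket protocol. A minimal ROS 2 (rclpy) publisher sketch of that pattern follows; the topic name and command payload are invented for illustration.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class CommandPublisher(Node):
    """Publishes a command string once per second on a named, typed topic.
    Any other node on the network can subscribe without knowing this one's
    address -- that discovery is what ROS provides over raw sockets."""

    def __init__(self):
        super().__init__('command_publisher')
        self.pub = self.create_publisher(String, 'robot_command', 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = 'movej home'  # illustrative payload only
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CommandPublisher())

if __name__ == '__main__':
    main()
```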
-
1. If you’re interested in #AI and #GenAI, you should follow Jim Fan.
2. Second-order effects of AI are coming fast, right into the home. #robotics
3. #OpenSourceAI is a force to be reckoned with.
NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Lab). Stanford Ph.D. Building Humanoid Robots and Physical AI. OpenAI's first intern. Sharing insights on the bleeding edge of AI.
We've found a cozy home for our robots in the world of bits! RoboCasa: a place where robot arms, dogs, and humanoids can train safely for daily tasks in procedurally generated simulations. RoboCasa uses LLMs, diffusion, and text-to-3D models to compose a diverse range of indoor environments and tasks. The release provides over 2,500 3D assets across 150+ object categories and dozens of interactable furniture and appliances. The more you randomize during training, the better your robots will learn and transfer from simulation to the real world! This work is led by Yuke Zhu's lab at UT Austin. I'm not part of the dev team, but plan to be the first customer! It's all open-source: https://robocasa.ai/ Paper in RSS 2024: https://lnkd.in/g7KtKT4s
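A toy sketch of the randomization idea in the post above, not RoboCasa's actual API: sample scene parameters afresh each training episode so the policy sees a broad distribution and transfers better to the real world. All parameter names and ranges below are invented for illustration.

```python
import random

def sample_episode_params(rng: random.Random) -> dict:
    """Draw one randomized scene configuration per training episode.
    Every name and range here is hypothetical, not RoboCasa's schema."""
    return {
        "table_height_m": rng.uniform(0.70, 0.95),
        "object_mass_kg": rng.uniform(0.05, 0.50),
        "light_intensity": rng.uniform(0.3, 1.0),
        "camera_jitter_deg": rng.uniform(-5.0, 5.0),
        "object_category": rng.randrange(150),  # e.g. one of 150+ categories
    }

rng = random.Random(0)
for episode in range(3):
    params = sample_episode_params(rng)
    # reset_simulation(params)  # hypothetical hook into the simulator
    print(episode, params)
```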