Vsim

Technology, Information and Internet

Reaching the impossible in physics simulation

About us

Vsim is a multi-physics simulation research and deployment company. We build a simulation engine that delivers accurate, scalable physics-based simulations in real time, and we develop novel, proprietary ML models that let our partners create and accelerate their simulations using Vsim as an execution engine. Our technology is used by VFX and animation studios, robotics teams, and reinforcement learning practitioners, to mention just a few of the applications.

Website
https://www.v-sim.co.uk
Industry
Technology, Information and Internet
Company size
2-10 employees
Headquarters
United Kingdom
Type
Privately Held
Founded
2022
Specialties
Multi-Physics Simulation Platform, VFX Engine, Reinforcement Learning Technology, Real-Time Animation, and Robotics Simulations

Updates

  • Vsim is delighted to announce that we have been awarded a multi-million pound grant from the Advanced Research and Invention Agency (ARIA). Led by Programme Director Jenny Read, ARIA’s Robot Dexterity programme will accelerate the development of robot hardware to unlock the full potential of advances in machine learning and AI, creating vastly more capable and useful machines. Vsim develops proprietary simulation technology, ML frameworks and tools that deliver order-of-magnitude performance improvements over existing solutions without compromising accuracy or generality. True dexterity relies on a combination of hardware and control software, and cutting-edge simulation can accelerate the development of both. We will expand our technologies and tools to streamline the design and evaluation of novel manipulator hardware, and to enable ML and evolutionary strategies to automate the optimisation of designs. The team at Vsim is excited to work with the other ARIA Creators, a diverse group of experts brought together on this ambitious programme, and we can’t wait to collaborate closely to produce impactful technologies that enable new dexterous capabilities!

    From developing a next-gen electronic skin for robots, to crafting novel tech for muscle-like actuation, meet the next seven teams we’re funding in Technical Area 1 of Robot Dexterity. 🦾 We’ll also see the emergence of new methodologies, including a design + manufacturing approach for tactile sensor systems and a framework for developing dexterous soft robotic manipulators. 🎯 Our goal? To realise the full potential of robots for transforming human safety, productivity, and prosperity. Discover the portfolio so far: https://lnkd.in/eBqkKQMP (with University of Cambridge, MorphoAI, Vsim, University of Bath, WAVEDRIVES LTD)

  • We have previously demonstrated locomotion and manipulation tasks trained with RL using custom reward/observation functions. We also recently showed Vlearn training a Cartpole balancing task with vision in around 20 seconds, and last week we showed multiple depth cameras within an environment in Vlab, highlighting the flexibility and performance of the camera simulation. However, Vsim’s technology doesn’t stop at accelerating training. We are adding many features to our robotics simulation platform, Vlab. In this video, we give a preview of Vlab’s authoring capabilities. Developers can use Vlab to help design their robots: they can adjust joint and link properties, place sensors, rig tendons, assign materials, and more, then verify the correctness of their designs in simulation. A rough open-tooling analogy is sketched below. #digitaltwin #robotics #AI #ML #RL #vsim #vlab #vlearn
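    Vlab’s authoring API is not public, so as a rough analogy in open tooling: the same kind of joint- and link-level edits can be expressed against a standard URDF robot description using Python’s xml.etree. The file and link names below are placeholders.

      # Minimal sketch: programmatically adjusting joint/link properties in a
      # URDF robot description, as a stand-in for the authoring workflow
      # described above. File and link names are placeholders.
      import xml.etree.ElementTree as ET

      tree = ET.parse("robot.urdf")
      root = tree.getroot()

      # Raise the damping on every revolute joint.
      for joint in root.iter("joint"):
          if joint.get("type") == "revolute":
              dynamics = joint.find("dynamics")
              if dynamics is None:
                  dynamics = ET.SubElement(joint, "dynamics")
              dynamics.set("damping", "0.8")

      # Adjust the mass of one link before re-verifying the design in simulation.
      mass = root.find("./link[@name='wrist_link']/inertial/mass")
      if mass is not None:
          mass.set("value", "1.2")

      tree.write("robot_edited.urdf")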

  • Last week, we showcased vision-based learning in Vlearn with the Cartpole benchmark. This time, we are showing multiple cameras and environments simulated using software-emulated ray tracing in Vlab. In this video, there are 61 environments in the scene. Each environment has four depth cameras, each rendered at 128x128 resolution. The cameras are positioned at different locations, with one attached to the robot’s wrist and three others observing the robot from different perspectives. The cameras were placed using the Vlab editor, and the camera feeds, along with other simulation properties, can be recorded to HDF5 files for use as synthetic data; a sketch of what such a recording might look like follows. #digitaltwin #robotics #AI #ML #RL #vsim #vlab #vlearn
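    Vlab’s actual HDF5 layout is not public; as a minimal sketch of what recording these feeds could look like with the open h5py library, assuming the dataset names, shapes and dtypes below:

      # Minimal sketch: recording per-step depth frames from 61 environments,
      # each with four 128x128 cameras, to an HDF5 file. Dataset names, shapes,
      # dtypes and the step count are assumptions, not Vlab’s actual layout.
      import h5py
      import numpy as np

      NUM_STEPS, NUM_ENVS, NUM_CAMS, H, W = 100, 61, 4, 128, 128

      with h5py.File("depth_capture.h5", "w") as f:
          depth = f.create_dataset(
              "depth", (NUM_STEPS, NUM_ENVS, NUM_CAMS, H, W),
              dtype="float16",                       # depth in metres, half precision
              chunks=(1, NUM_ENVS, NUM_CAMS, H, W),  # one simulation step per chunk
              compression="gzip",
          )
          joints = f.create_dataset("joint_pos", (NUM_STEPS, NUM_ENVS, 7), dtype="float32")

          for step in range(NUM_STEPS):
              # Random stand-ins for the simulator’s per-step outputs.
              depth[step] = np.random.rand(NUM_ENVS, NUM_CAMS, H, W).astype("float16")
              joints[step] = np.random.rand(NUM_ENVS, 7).astype("float32")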

  • Happy New Year! The team at Vsim have been working on lots of new technologies over the past few months. In previous posts, we showed extremely fast training with task-specific observation functions. In this post, we show our first vision-based policy, trained on a software-emulated depth camera rendered with ray tracing: the classic Cartpole task. In this video, we use an 80x80 resolution depth camera and train 512 parallel environments. The video is a real-time capture of the policy being trained on a single RTX 4090 using our ML framework, Vlearn. Training takes around 42 seconds to complete 500 epochs while rendering the environments and cameras, including start-up and shut-down time, and a working policy appears within 10 seconds. Without rendering, we can train 500 epochs in around 26 seconds with 80x80 resolution cameras; 500 epochs take around 53 seconds with 128x128 cameras and just over 3 minutes with 256x256 cameras. A generic sketch of a vision-based policy network of this kind follows. If you are interested in trying our ML training framework, please reach out to us at https://lnkd.in/ezxa8TE5 #digitaltwin #robotics #AI #ML #RL #vsim #vlearn
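    Vlearn’s internals are not public; as a generic illustration of the technique, here is a compact PyTorch actor-critic with a small CNN encoder over 80x80 depth frames, of the kind commonly paired with PPO. All layer sizes are assumptions, not Vlearn’s architecture.

      # Generic sketch of a vision-based actor-critic over 80x80 depth images.
      # Illustrative only; not Vlearn’s actual network.
      import torch
      import torch.nn as nn

      class DepthActorCritic(nn.Module):
          def __init__(self, num_actions: int = 1):
              super().__init__()
              self.encoder = nn.Sequential(                # 1x80x80 depth frame in
                  nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                  nn.Flatten(),
              )
              with torch.no_grad():                        # infer the flat feature size
                  feat = self.encoder(torch.zeros(1, 1, 80, 80)).shape[1]
              self.actor = nn.Linear(feat, num_actions)    # mean of a Gaussian policy
              self.critic = nn.Linear(feat, 1)             # state-value estimate

          def forward(self, depth: torch.Tensor):
              z = self.encoder(depth)
              return self.actor(z), self.critic(z)

      # 512 parallel environments produce one batch of depth frames per step.
      model = DepthActorCritic()
      actions, values = model(torch.zeros(512, 1, 80, 80))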

  • At the beginning of this week, we showed a classic humanoid running policy. Today, we are showing an H1 robot locomotion task. This policy trains the H1 robot to walk in an arbitrary direction, following a target velocity and orientation. The policy is trained in Vlearn using 4096 environments and 1000 epochs. It took around 25 minutes to train using a 3080 laptop GPU and around 8 minutes using a desktop 4090 GPU. In the video below, we ran the policy in Vlab and commanded the H1 robots to follow a spline (highlighted in blue); a generic sketch of this kind of tracking reward follows. The training graph for the 4090 is in the comments. #digitaltwin #robotics #AI #ML #RL #vsim #vlearn #vlab
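    Vsim’s actual reward function is not published; velocity/orientation-following tasks like this are commonly shaped with exponential tracking terms, sketched generically below. All coefficients are assumptions.

      # Generic sketch of a velocity/heading tracking reward of the kind widely
      # used for locomotion RL. Illustrative only; not Vsim’s reward.
      import numpy as np

      def locomotion_reward(base_lin_vel, base_yaw, cmd_lin_vel, cmd_yaw,
                            sigma_vel=0.25, sigma_yaw=0.25):
          """base_lin_vel, cmd_lin_vel: (2,) planar velocities in the base frame.
          base_yaw, cmd_yaw: current and commanded heading in radians."""
          vel_err = np.sum((cmd_lin_vel - base_lin_vel) ** 2)
          yaw_err = (cmd_yaw - base_yaw + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
          vel_reward = np.exp(-vel_err / sigma_vel)
          yaw_reward = np.exp(-yaw_err ** 2 / sigma_yaw)
          return vel_reward + 0.5 * yaw_reward

      # Example: commanded 1 m/s forward, currently moving at 0.8 m/s, heading on target.
      r = locomotion_reward(np.array([0.8, 0.0]), 0.0, np.array([1.0, 0.0]), 0.0)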

  • Last week, we showed manipulation tasks trained in Vlearn and run in Vlab. This week, instead of manipulation tasks, we are showing locomotion tasks. To start, we trained the classic humanoid running task in Vlearn and ran it in Vlab. The fully converged policy, trained over 1000 epochs, takes approximately 7 minutes to train on a 3080 laptop GPU and 3 minutes on a 4090 desktop GPU while simulating 4096 humanoids concurrently. However, a stable running gait emerges within 2 minutes of training on the 3080 laptop and in less than a minute on the 4090. The training graph for the 4090 is in the comments. #digitaltwin #robotics #AI #ML #RL #simulation #vsim #vlearn #vlab

  • Last week, we showed hundreds of Franka Panda arms screwing a nut onto a bolt using IK. Today, we are showing another manipulation task. Instead of using IK, this demo uses an RL policy trained in our Vlearn system: we trained the TriFinger robot to push a cube to a target position. This policy takes around 5 minutes to train using an RTX 4090 desktop GPU and around 18 minutes using an RTX 3080 laptop GPU. The video shows the trained policy running in Vlab; a generic sketch of this kind of push-to-target reward follows. The training graph for the RTX 4090 is in the comments. #digitaltwin #robotics #ML #RL #AI #simulation #vsim #vlearn #vlab
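    Again, the actual reward is not published; cube-pushing tasks are often shaped by progress toward the target plus a sparse success bonus, as in this generic sketch. All constants are assumptions.

      # Generic sketch of a push-to-target reward: dense progress shaping plus
      # a success bonus. Illustrative only; not Vsim’s reward.
      import numpy as np

      def push_reward(cube_pos, target_pos, prev_dist, success_radius=0.02):
          """cube_pos, target_pos: (3,) positions; prev_dist: last step's distance."""
          dist = np.linalg.norm(target_pos - cube_pos)
          progress = prev_dist - dist                  # > 0 when the cube moves closer
          bonus = 5.0 if dist < success_radius else 0.0
          return 10.0 * progress + bonus, dist         # return dist for the next step

      # Example step: cube at the origin, target 10 cm away, previous distance 0.12 m.
      r, d = push_reward(np.zeros(3), np.array([0.1, 0.0, 0.0]), 0.12)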

  • Over the past few months, the Vsim team has been heads-down building an RL training framework (Vlearn) and a robotics simulation platform (Vlab). We are really excited to show how far we’ve progressed with these two projects over the coming days. Today, we are showing our first glimpse of large-scale, high-fidelity simulation in Vlab. In the video below, hundreds of Franka Panda arms are screwing nuts onto bolts. We use IK and state machines to control the motion of the arms (sketched generically below), and the nuts, bolts and Franka arms all use the render mesh/CAD model for collision. The interactions shown in this video are all achieved through contact and friction. The video is rendered directly from Vlab using path tracing; however, simulating and rendering this scene in real time is possible in Vlab with conventional rendering. #digitaltwin #robotics #simulation #RL #ML #AI #vlab #vsim #vlearn
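    Vsim’s controller is not public; the IK-plus-state-machine pattern described above can be sketched generically as follows, where solve_ik() is a hypothetical stand-in for any inverse-kinematics solver.

      # Generic sketch of the IK + state-machine control pattern described
      # above. solve_ik() is a hypothetical stand-in; not Vsim’s controller.
      from enum import Enum, auto
      import numpy as np

      class Phase(Enum):
          APPROACH = auto()   # move the gripper above the nut
          GRASP = auto()      # close the gripper on the nut
          TURN = auto()       # rotate the wrist to drive the thread
          RELEASE = auto()    # open, lift, and reset for the next turn

      def solve_ik(target_pos):
          """Hypothetical IK stand-in: joint targets for a 7-DoF arm."""
          return np.zeros(7)

      def step_controller(phase, nut_pos, wrist_angle):
          """One state-machine tick: returns (joint_targets, next_phase)."""
          above = nut_pos + np.array([0.0, 0.0, 0.05])    # 5 cm hover offset
          if phase == Phase.APPROACH:
              return solve_ik(above), Phase.GRASP
          if phase == Phase.GRASP:
              return solve_ik(nut_pos), Phase.TURN
          if phase == Phase.TURN:
              q = solve_ik(nut_pos)
              q[-1] = wrist_angle + np.pi / 3             # fixed wrist increment; the
              return q, Phase.RELEASE                     # thread advance itself comes
          return solve_ik(above), Phase.APPROACH          # from contact and friction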

  • We are excited to announce our seed funding round led by EQT Ventures (Sandra Malmberg 👋, Ted Persson, Sai Sriramagiri, Naza Metghalchi). We are so happy to partner with our other investors (Reece Chowdhry and Concept Ventures, Factorial Funds, Carles Reina, Warrick Shanly, Samsung Next, Tru Arrow Partners, Temasek, IQ Capital, Laura Modiano and Mehdi Ghissassi, Lakestar). This funding will allow Vsim to build a world-class team to help push the boundaries of robotics AI. I would also like to thank Ingrid Lunden for the article on Vsim’s funding round: https://lnkd.in/ge64uFQy

    Over the past few months, we have expanded our simulation platform (Vsim) to include features such as RGB and depth cameras, sensors, and an animation system. Our ray-tracing camera system is specifically designed to accelerate vision-based learning by rendering massive numbers of views at up to one million frames per second on a single RTX 4090. We are building a reinforcement learning training framework (Vlearn) that leverages Vsim to deliver order-of-magnitude training performance improvements over existing solutions. We are also building a robotics platform (Vlab) on top of our simulation platform and Unreal Engine 5; it provides authoring capabilities for applications to set up environments and robots, along with simulation and inference, and we intend to expand its functionality over the coming months.

    We are working on lots of ground-breaking technologies that we can’t wait to unveil. If you share our passion for technology and robotics AI and want to work on cutting-edge technology in a talented, driven team, please contact us here: https://v-sim.co.uk/ We are looking to hire research engineers with experience in ML, simulation, tools, robotics control and manipulation.
