November 4, 2024
TeamViewer engineers are looking at how AI, AR, and robotics are changing our lives and work.
One of the most exciting developments in tech right now is embodied artificial intelligence (EAI), as seen in humanoid robots. The long-term goal is to make them reason and act almost like humans.
At the same time, lots of non-humanoid robots are already used in industries worldwide. We spoke to TeamViewer engineers Eduardo Grifo and Theodore Tzirides to get their thoughts on how robotics, AI, and augmented reality (AR) shape the future.
TeamViewer engineers talk about the future of AI and robotics.
We’ll probably see more humanoid robots with complex movements and human-like interactions. Take Boston Dynamics’ Atlas: 10 years ago, it was still struggling to walk on uneven surfaces. Now, it’s even doing parkour.
Humanoid robots can be a great showcase for technologies like AI, natural language processing, and computer vision. But if we stop thinking about the humanoid form, we might see even more applications. They could be used in retail environments to restock shelves or in assisted living facilities, for example. The sky is the limit.
It’s an exciting time for AI and robotics, that’s for sure. Many technologies have reached a point where we can use them in real-world applications. That’s why you’re starting to see Tesla and other companies applying what’s been researched over the last 20 years in robotics and AI. The big paradigm shift with large language models (LLMs) is driving this wave. It’s making AI smarter. We may see some very exciting things, but we also need to consider the human factor.
Intelligent robots could make our lives easier by taking over some of the riskier jobs. This could also help with labor shortages, as we saw during the pandemic. They could also take on jobs that are less in demand, freeing up humans to focus on tasks that require more creativity or are simply more satisfying.
With climate change, we’ll unfortunately start to see more natural disasters. The good news is that robotics can help there too. For example, robots can help mitigate risks in search and rescue operations during a natural disaster by enabling police and firefighters to locate people with drones or similar machines.
AGI has been the holy grail of AI since the 1960s. But the current algorithms and methods aren’t getting us closer to this goal. They’re more of a narrow case of AI. Even LLMs that start to be very human-like don’t have many attributes of the human brain. They can’t reason very well. Putting all this together on a single robot will be difficult to achieve with our current technology.
General intelligence is what sets humans apart. We can learn new things and build on what we already know, using our knowledge in different ways. This is general intelligence. For example, you can learn to drive a Tesla and apply that skill to driving an SUV.
Our current algorithms cannot do that. They’re trained on a specific type of data, vast amounts of it, from across the whole Internet, right? This helps them generate interesting answers, but they cannot tackle new problems. To do that, we either need AGI, or we need the current narrow AI to work together with humans.
However, we’re seeing a kind of emergent intelligence: the bigger the model, the more it produces answers that do not exist in the data it was trained on. So, there’s a silver lining there. But we haven’t found the limits of this attribute yet. Maybe we’ll find them in a year. Maybe we never will.
Maintenance work often involves repetitive tasks like inspecting dials or taking measurements. Having a robot do this could be a great solution. Companies like ANYbotics, with their ANYmal robot, focus on this exact use case.
There are also maintenance tasks that involve health risks for humans. For example, inspecting wind turbines for defects 100 meters above the ground, with the risk of falling. Performing those tasks with an intelligent robot can be a great way to reduce those risks.
AR could be a great way to interact with these robots. It would let users control them in a more personalized and natural way. While a tablet provides some depth perception, AR could be better at showing the robot’s surroundings and letting the operator control it more intuitively.
Thank you both for your time.
Sandro Cocca conducted the interview.
Find out about TeamViewer’s new AI capabilities.