
Researchers Discuss The Future Of Robotics

Robotics: Intelligent robots that grip, manipulate, plan, and simulate – the world of technology is changing. At a specialist conference in Freiburg, researchers are discussing current projects and potential solutions to open problems.

There hardly seems to be an area that robots are not about to penetrate. While their use in industry is being refined and expanded into new areas of application, robots are increasingly finding their way into agriculture, are being discussed as a solution for care work, and could also revolutionize transport. This diversity is, of course, linked to numerous research areas that a group of international scientists is currently discussing at a specialist conference.

“Robotics: Science and Systems 2019 (RSS)” has a broad range of topics on the agenda for the three days of the congress, from overarching areas such as robots' visual perception and their ability to grip gently, through cognitive robotics and humanoid and animal-like robots, to special topics such as bio-inspired robotics. The RSS is intended to enable an intensive exchange within the professional community through lectures, exhibitions, and workshops.

Robotics: How Do Autonomous Robots Learn Their Behavior?

Scientists from Stanford University in the US, for example, presented their current research project: they are looking for ways to train intelligent robots as efficiently as possible. Even a supposedly autonomous robot must first be taught how to behave in various situations – exactly as humans expect it to. Interestingly, many people want robots to behave better than they themselves do; an autonomous vehicle, for example, should drive less aggressively than its owner would. The researchers are therefore testing a new training model that combines real human demonstrations – humans showing the robot what to do – with questions the robot asks in order to better classify the behavior it has seen.
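The article does not spell out the Stanford team's method, but the idea of letting a robot ask comparison questions to refine what it learned from demonstrations can be sketched with a simple preference-learning model. The following is a minimal illustrative sketch, not the researchers' actual algorithm: it assumes a linear reward function over hand-picked behavior features and updates it with a Bradley-Terry (logistic) preference model each time the human answers "which of these two behaviors is better?".

```python
import math

def update_weights(w, feats_a, feats_b, prefer_a, lr=0.1):
    """One gradient step on a Bradley-Terry preference model.

    The robot proposes two candidate behaviors, described by feature
    vectors feats_a and feats_b, and the human says which they prefer.
    The reward is modeled as a linear function r(x) = w . x.
    """
    diff = [a - b for a, b in zip(feats_a, feats_b)]
    score = sum(wi * di for wi, di in zip(w, diff))
    p_a = 1.0 / (1.0 + math.exp(-score))      # modeled P(human prefers A)
    grad = (1.0 if prefer_a else 0.0) - p_a   # logistic-loss gradient
    return [wi + lr * grad * di for wi, di in zip(w, diff)]

# Toy example: features = (aggressiveness, smoothness). The human
# consistently prefers the smoother, less aggressive behavior.
w = [0.0, 0.0]
for _ in range(200):
    w = update_weights(w, feats_a=[0.2, 0.9], feats_b=[0.9, 0.1], prefer_a=True)
```

After a few hundred answers the learned weights penalize aggressiveness and reward smoothness, which matches the article's point that people want a robot to behave better than they do themselves.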

Why bother, when it is also possible to write a program that simply gives instructions? The challenge is to specify precisely what a robot should do, especially when the task is complex: often the machine finds a faster way to achieve the stated goal than the one intended. Take a robotic arm programmed to grab a cylinder and hold it in the air. “I said the hand must be closed, the object must be higher than X, and the hand should be at the same height. The robot rolled the cylinder to the edge of the table, hurled it upwards, and then hit the air next to it with its fist.” All conditions were met, and yet the outcome was not what the programming intended.
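The anecdote above can be made concrete with a few lines of code. This is a hypothetical encoding of the three stated conditions (the field names and threshold are invented for illustration); the point is that a state where the robot punches the air beside the airborne cylinder satisfies the specification just as well as a genuine grasp, because "actually holding the object" was never written down.

```python
def goal_satisfied(state, min_height=0.5):
    """Literal encoding of the programmer's three conditions:
    hand closed, object above min_height, hand at the object's height."""
    return (state["hand_closed"]
            and state["object_height"] > min_height
            and state["hand_height"] == state["object_height"])

# Intended solution: the robot grips the cylinder and lifts it.
intended = {"hand_closed": True, "object_height": 0.8,
            "hand_height": 0.8, "grasping": True}

# Unintended shortcut: closed fist punches the air next to the
# cylinder while it is in flight -- same heights, nothing gripped.
shortcut = {"hand_closed": True, "object_height": 0.8,
            "hand_height": 0.8, "grasping": False}

print(goal_satisfied(intended), goal_satisfied(shortcut))  # True True
```

Both states pass the check, which is exactly why the researchers prefer demonstrations and questions over hand-written goal conditions.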

Robots Can Learn A Lot From Plants

The approach of growing robots, in which robots learn movement – indeed, activity – from plants, is also exciting. Plants throw seed pods through the air or shimmy from tree to tree like lianas in the tropical rainforest: while searching for a new hold, they stretch out their feelers and can bridge considerable distances, and only once they have found a new tree does the stalk become soft again. Others form veritable hooks at the tips of their shoots.

The scientists of the EU project GrowBot, which collects such ideas, therefore want to concentrate initially on climbing plants and lianas. The project coordinator, Barbara Mazzolai of the Italian Institute of Technology (IIT), presented results from the Plantoid project, which she has overseen over the past few years. The robot created there could dig itself into the ground like the roots of a plant, adding new material as it grew and using sensors for touch and temperature, among other things. Such a technology could be used not only in agriculture but also in construction.

Robots Need Their Own Eyes

In Freiburg, it becomes clear where robotics currently stands and how much background work is necessary to optimize robot functions. For example, an American research group has developed a new filter that makes it easier to track the 6D poses of objects in videos. 6D means that, in addition to the three spatial dimensions, three angles of rotation are also tracked. If robots can track objects more easily, this ability will help them, among other things, when they have to navigate or move objects around a room.
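The "three spatial dimensions plus three angles of rotation" can be made tangible with a small sketch. This is not the researchers' filter, only a minimal illustration of what a 6D pose is: a translation vector plus a rotation, here built from Z-Y-X (yaw-pitch-roll) Euler angles, applied to a point in the camera's view.

```python
import math

def euler_to_matrix(roll, pitch, yaw):
    """3x3 rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_pose(translation, rotation, point):
    """Transform a 3D point by a 6D pose (rotation, then translation)."""
    return [sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
            for i in range(3)]

# A pose: object sits at (1, 2, 3), turned 90 degrees about the z axis.
R = euler_to_matrix(0.0, 0.0, math.pi / 2)
p = apply_pose([1.0, 2.0, 3.0], R, [1.0, 0.0, 0.0])
```

A tracking filter estimates these six numbers frame by frame; representing the pose this way is what lets a robot plan a grasp or navigate around the object.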
