Oh no, it is the third copy of the same video:
- original: https://hive.blog/hive-167922/@taskmaster4450le/twrppcng
- first duplicate: https://hive.blog/hive-167922/@taskmaster4450/muonygqe
- and now this is the second duplicate
On the topic of humanoid robots - I don't think fully humanoid robots will be all that popular in the future. They only make sense for sex dolls (and even there I'm sure all kinds of bunnies, catgirls, elves, succubi and even lamias or mermaids are going to be on the menu). A robo-receptionist in a hotel only needs, at most, the upper half to look human. A robo-waitress, even if mostly human-like, might use electric rollerblades for efficient transportation. Similarly a robo-nurse, except it might also have more "hands". A robo-porter working outside of buildings might use bigger wheels, and there is really not much need for it to look human when it can do its work better as a box on wheels with one arm for loading/unloading packages. An automatic lawn mower is perfectly fine looking the way it does today. A robot that can be repurposed for vastly different tasks is not really something that will be in high demand (a universal robot costs more to manufacture and loses productivity compared to a specialized one), and even if it were, the humanoid is far from the most universal and efficient shape.
There are two big topics that AI addresses in the context of robots:
- navigation - being able to put a task-specialized robot in a non-standardized environment and expect it to adapt and map its workspace on its own (e.g. the same model of cleaning robot working in hotels with different layouts, door handles, doorsteps, furniture and floor materials).
- a way to parameterize its work functions (a way to learn the details of a job) - right now you have to program a robot for a specific task. For quite some time there have been robots that can replicate the moves of a human worker, e.g. for spray-painting a car, but they don't adapt on their own - if the surface they are supposed to paint is not there, they will still paint the air. With sufficiently advanced AI you should be able to tell the robot to watch a DIY video on YouTube and learn how to achieve the desired result (see the sketch after this list).
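To make the spray-painting example concrete, here's a minimal sketch in Python - the robot API (`detect_surface`, `spray_at`) is entirely made up for illustration, not any real robotics library:

```python
# Hypothetical robot API - detect_surface() and spray_at() are placeholders,
# not calls from any real robotics framework.

def replay_blindly(robot, recorded_trajectory):
    # Classic "teach and repeat": the robot reproduces the recorded moves
    # no matter what is in front of it - if the car body is missing,
    # it happily paints the air.
    for pose in recorded_trajectory:
        robot.spray_at(pose)

def replay_with_perception(robot, recorded_trajectory):
    # The AI-assisted version: before each move the robot checks whether
    # the surface it is supposed to paint is actually there and adjusts
    # (or skips) the move accordingly.
    for pose in recorded_trajectory:
        surface = robot.detect_surface(near=pose)
        if surface is None:
            continue                          # nothing to paint here, skip
        robot.spray_at(surface.adjusted_pose(pose))
```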
One more thing that might be developed along the way is some sort of standard for sensors and a "body map". I mean that a multipurpose robot might have configurable "limbs" depending on the task it is supposed to do. It would make sense for each appendage to come with its own "sensory cortex" that communicates in a standard way with the main "brain", so that when a particular tool is connected, the robot does not have to learn from scratch how to move and use it.
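A rough sketch of what such a self-describing "body map" interface could look like (all names and structures here are my own invention, just to illustrate the idea):

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Protocol

# Everything below is illustrative - no such standard exists today.

@dataclass
class JointSpec:
    name: str
    range_deg: tuple[float, float]   # allowed range of motion
    max_torque_nm: float

@dataclass
class BodyMapEntry:
    tool_id: str                     # e.g. "gripper-v2" or "spray-nozzle"
    joints: list[JointSpec]          # kinematics the tool brings with it
    sensors: list[str]               # e.g. ["force", "proximity"]

class Appendage(Protocol):
    def describe(self) -> BodyMapEntry:
        """The tool's own 'sensory cortex' reports what it is and what it can do."""
        ...

def attach(body_map: dict[str, BodyMapEntry], tool: Appendage) -> None:
    # On connection, the main "brain" simply merges the tool's self-description
    # into its body map instead of re-learning the tool from scratch.
    entry = tool.describe()
    body_map[entry.tool_id] = entry
```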