Well, since robots have near-limitless data storage, 24 scenes could just as well be 24,000,000 scenes, with what each model learns shared over the web and passed around between robots.

Would have been nice to see a video of this robot in action.

Similarly, your personal robot in the future will need the ability to generalize -- for example, to handle your particular set of dishes and put them in your particular dishwasher.
This is deduction, not induction: it's working from a general body of knowledge down to a specific case. Training mode would be inductive, and operating mode would be deductive.
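
Roughly what I mean, as a toy Python sketch (the feature values and labels are made up, nothing from the article): training generalizes a rule from specific examples, and operation applies that general rule to a new specific case.

```python
# Toy sketch of the induction/deduction split described above.
# All numbers and labels are invented for illustration.

def induce_rule(training_examples):
    """Training mode (induction): generalize from specific labeled cases.
    Here the learned 'rule' is just the average height per label."""
    sums, counts = {}, {}
    for height, label in training_examples:
        sums[label] = sums.get(label, 0.0) + height
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def deduce_label(rule, height):
    """Operating mode (deduction): apply the general rule to one new object."""
    return min(rule, key=lambda label: abs(rule[label] - height))

# Inductive phase: specific observations -> general rule
rule = induce_rule([(0.45, "chair"), (0.50, "chair"), (0.75, "table"), (0.72, "table")])

# Deductive phase: general rule -> judgment about a specific new object
print(deduce_label(rule, 0.48))  # -> "chair"
```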

What they have is still just symbol recognition. They don't recognize the objects at all.

If you show the robot different kinds of chairs, it will assign the symbol "chair" to a few common geometric and visual properties of chairs. The context of the chair is just an extension of this process - chairs are found near tables.
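
Something like this toy sketch is all that "recognition" amounts to; the thresholds and the context rule are invented for illustration, not taken from the actual robot:

```python
# Hypothetical "symbol recognition": a label is attached to a bundle of
# geometric cues, plus a context rule ("chairs are found near tables").

def classify(obj, nearby_objects):
    # Crude geometric cues: chairs are lower and often have a backrest.
    score_chair = 0
    if obj["height_m"] < 0.6:
        score_chair += 1
    if obj["has_vertical_back"]:
        score_chair += 1

    # Context cue: being near something already labelled "table" boosts "chair".
    if any(n["label"] == "table" for n in nearby_objects):
        score_chair += 1

    return "chair" if score_chair >= 2 else "table"

stool = {"height_m": 0.45, "has_vertical_back": False}
print(classify(stool, [{"label": "table"}]))  # -> "chair", purely from shape cues plus context
```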

But it doesn't get "chair". If you put a large chair next to a small table, it will get them wrong. The "chairness" or "tableness" of these objects doesn't depend on what they look like, but on what they're used for. Since the machine only understands symbols and their relations, it has no idea why the same object is a chair when you sit on it but a table when you rest your coffee mug on it. You may teach it this relationship as well, but take away the coffee mug and it's clueless again.
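
To make the failure concrete, here's a hypothetical relative-size rule of the kind such a system might lean on (heights made up): the use of the object never enters the decision, so a large chair beside a small table flips the labels.

```python
# Relative-size heuristic: "the shorter object of a pair is the chair".
# Affordance ("you sit on it") plays no role at all in this decision.

def label_pair_by_size(obj_a, obj_b):
    """Assign 'chair' to the shorter object and 'table' to the taller one."""
    if obj_a["height_m"] < obj_b["height_m"]:
        return {"a": "chair", "b": "table"}
    return {"a": "table", "b": "chair"}

# Typical dining set: the rule happens to work.
print(label_pair_by_size({"height_m": 0.45}, {"height_m": 0.75}))
# -> {'a': 'chair', 'b': 'table'}

# Large armchair next to a small coffee table: the same rule gets both wrong.
print(label_pair_by_size({"height_m": 0.95}, {"height_m": 0.40}))
# -> {'a': 'table', 'b': 'chair'}
```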

This is why I think the AI researchers are going about the problem the wrong way. They're not making intelligence; they're just making something that acts like it through clever programming.

Would have been nice to see a video of this robot in action.

Have you seen the video on this page? I find it very impressive.
http://www.physor...ity.html