Tuesday 11 October 2016

Future Robotics will be defined by Validity

Can Robots be fooled or is that Illogical, Spock?

In the future, how will Robots think? What will they need to 'know'? How will they define data? Will they see it as we do, in characters, or will they take it in as Binary Data?

What if they get it wrong?

The Robot has come a long way from the tinplate tin-box tyrants that post-WW2 Japanese toy makers offered as their idea of the 'artificial' human.

Wired for Binary - our brains are wired too, electrically; the computer needs parallel processing to bring data-transfer and scenario-modelling speeds up to the level of our 'wetware'

The quest from time immemorial has been to make a humanoid Robot 'in the image' of us Humans as we are now. Indeed, what may we morph into in the future if Project Avatar becomes the norm?

Technology moves on at a blinding pace, so will we get 'there' and make a humanoid replicant like ourselves before we as a species expire?

Do we design a lookalike version, or do we take the bold step and build a Human 2.0 or a Human 3.0, which would make the Humans of today look like a 1908 Model T Ford next to a Ford Mustang GT 500?

So we either future-proof to some extent as we design, or just build a comparable, like-for-like unit.


The shape of things to come?
The Human 2.0 of tomorrow, perhaps?

The Human accumulates data, 'knowledge', as it ages and grows. So would our Humanoid just have a basic OS (operating system) and pull data in as required from an ethereal web connection, or would it have a massive data bank on board, pre-loaded to know just about everything?

The benefit of the ethereal connection is that the Humanoid could become an 'instant expert' on anything, with correct telemetric data so that it could perform a task right first time, every time, without having to 'machine learn' the process as Humans do.

It could turn its hand to anything from sculpture to playing a musical instrument without the need to learn. It may develop, or have developed, its own 'machine learning' so it can invent or improvise; cybernetic controls on the algorithmic 'learning' process would need to be in place to prevent damage to the unit, and to us, from its own 'jazz'.
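As a rough, purely illustrative sketch of the 'basic OS that pulls data in as required' idea, the Python below keeps a small on-board cache and fetches a topic only on first use. The remote knowledge base here is just a dictionary standing in for the ethereal connection, and the topics and values are invented.

# Minimal sketch of the 'basic OS plus pull data on demand' idea.
# The remote knowledge base is simulated by a plain dictionary;
# a real unit would make a network call over its 'ethereal' connection.

REMOTE_KNOWLEDGE = {
    "cello fingering": "telemetry tables for bow pressure and finger spacing",
    "marble sculpture": "chisel angles and force limits for Carrara marble",
}

local_cache = {}  # what the unit has already pulled in and now 'knows'

def know(topic):
    """Return what the unit knows about a topic, fetching it on first use."""
    if topic not in local_cache:
        local_cache[topic] = REMOTE_KNOWLEDGE.get(topic, "no data available")
    return local_cache[topic]

print(know("cello fingering"))   # first call: pulled from the remote store
print(know("cello fingering"))   # second call: answered from the on-board cache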

Neurons can be replaced by processor chips, but an effective heat-transfer system is required, otherwise the heat could damage the hardware

The definition of data, and how the Humanoid Robot will perceive, translate and use it, is another question to consider. How will the acquisition of data be achieved? Will an ocular input to the processing unit recognise objects and apply telemetry data for handling and awareness visually, as we do, or will the sensors assess the solidity of objects, and perhaps their molecular makeup, prior to interacting with them?
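To make the 'ocular input versus solidity sensing' question a little more concrete, here is a small hypothetical sketch in which a vision guess and a separate solidity reading are combined before the unit decides how firmly to grip an object. The labels, thresholds and readings are invented for illustration, not drawn from any real sensor.

# Hypothetical sketch: fuse an ocular (vision) estimate with a solidity
# sensor reading before the unit interacts with an object.

def choose_grip(vision_label, vision_confidence, solidity):
    """Pick a grip for an object seen as `vision_label`, given a 0-1 solidity reading."""
    if vision_confidence < 0.5:
        return "probe gently first"                  # the eyes alone are not trusted
    if solidity < 0.3:
        return f"soft grip on the {vision_label}"    # fragile or squashy object
    if solidity > 0.8:
        return f"firm grip on the {vision_label}"
    return f"medium grip on the {vision_label}"

# The cameras think it is an egg, and the solidity reading confirms it is fragile.
print(choose_grip("egg", vision_confidence=0.9, solidity=0.1))  # soft grip on the egg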

Validity of data is the thing we have to resolve

The big question we finally have to test is the validity of the data that the unit will perceive or interpret, including data from the Internet it taps into. What if it taps into something that is incorrect?

This could be a very dangerous situation: if the unit picks up bad data, it could cause a problem for Humans. We have to know whether Asimov's three laws will be programmed in or not, and if the unit is told to ignore them, will it do so and cause harm?
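What 'testing validity' might look like in practice is open, but a minimal sketch could pair two simple ideas: accept a claim only when several independent sources agree on it, and check every action against a hard-coded, non-overridable 'no harm to Humans' rule. The Python below only illustrates those two ideas, with invented example data.

# Sketch: accept data only when independent sources agree, and gate every
# action behind a hard-coded first-law-style check that cannot be overridden.

from collections import Counter

def validate(claims, required_agreement=2):
    """Accept a claim only if at least `required_agreement` sources agree on it."""
    claim, count = Counter(claims).most_common(1)[0]
    return claim if count >= required_agreement else None

def permitted(action, harms_a_human):
    """Hard-coded first-law gate: an action flagged as harmful is always refused."""
    return not harms_a_human(action)

# Two of three 'internet' sources agree, so that claim is accepted;
# a claim with no agreement would come back as None and be ignored.
sources = ["bleach is toxic", "bleach is drinkable", "bleach is toxic"]
print(validate(sources))                                            # 'bleach is toxic'
print(permitted("hand scalpel to surgeon", harms_a_human=lambda a: False))  # True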

A spot of Babbage - the first programmable mechanical computer

Essentially, Human safety has to be the paramount concern. The danger of a rogue Robot is fiction in the film 'Westworld', but a very real possibility in our case. Even a static production-line Robot can cause damage if its software becomes corrupted and the safeguards don't work.
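One conventional safeguard against corrupted software, sketched below purely as an illustration, is to check the control program against a known-good hash before motion is ever enabled, and to halt if the check fails. The filename and expected hash here are hypothetical.

# Sketch: refuse to enable motion unless the control program's hash matches
# a known-good value recorded when the software was installed.

import hashlib

EXPECTED_SHA256 = "0" * 64          # hypothetical known-good hash

def control_software_intact(path="arm_controller.bin"):
    try:
        with open(path, "rb") as f:
            data = f.read()
    except OSError:
        return False                 # missing software counts as unsafe
    return hashlib.sha256(data).hexdigest() == EXPECTED_SHA256

def start_production_cell():
    if not control_software_intact():
        return "HALT: control software failed its integrity check"
    return "motion enabled"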

How much capacity we build in for the machine to 'learn' is another consideration. There is always the possibility of a malevolent 'mad scientist' type creating electronic mayhem.

This is the reason we need a workable Robot Manifesto. We need to work within its parameters, or Robot Anarchy will ensue, and that could finish us as a species!


